How can I detect when error correction corrects something, or turn off error correction, in wireless (WiFi) communication?

I'm working on a project in which I basically want to test how many errors happen in wireless communication.
My first thought was to use ESP8266 chips, but then I realized that WiFi has ways of correcting errors before they reach the other device. So I was wondering if there's a way to either stop this correction from happening or detect when it happens.
I also thought of using the micro:bit's radio feature, since micro:bits are easily available to me and they don't use WiFi. The problem here is that I can't find anything that says whether the micro:bit's radio uses error correction or not.
If there's another possibility that I'm not seeing that would definitely let errors through, I would very much like to know.
If all else fails, I could still use antennas, but that would bring a lot of other problems with it for what I'm trying to do.
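If the micro:bit route turns out to be viable, here is a minimal sketch of how such a test could be structured, assuming two boards running MicroPython and its radio module; the channel, transmit power, payload pattern, and timing are my own choices rather than anything from the question. As far as I can tell, the micro:bit's radio discards packets that fail its CRC check rather than correcting them, so errors will mostly show up as missing sequence numbers rather than as corrupted payloads that actually get delivered.

    # Board A (sender) -- MicroPython sketch; all settings here are assumptions.
    from microbit import sleep
    import radio

    radio.on()
    radio.config(channel=7, power=0)   # minimum transmit power, to provoke more loss

    seq = 0
    while True:
        # One sequence byte followed by a fixed, known pattern the receiver can check.
        radio.send_bytes(bytes([seq]) + b"\xAA" * 16)
        seq = (seq + 1) % 256
        sleep(20)

The receiver compares whatever arrives against the agreed pattern and counts gaps in the sequence numbers:

    # Board B (receiver)
    from microbit import sleep
    import radio

    radio.on()
    radio.config(channel=7)

    expected = None
    received = lost = corrupted = 0

    while True:
        msg = radio.receive_bytes()
        if msg:
            received += 1
            if msg[1:] != b"\xAA" * 16:
                corrupted += 1                     # delivered, but the pattern is damaged
            if expected is not None:
                lost += (msg[0] - expected) % 256  # packets that never arrived
            expected = (msg[0] + 1) % 256
            print("rx:", received, "lost:", lost, "corrupted:", corrupted)
        sleep(5)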

Related

Device tracking without WiFi or GPS

This isn't necessarily a specific code question; I'm more hoping that someone might be able to point me in the right direction. I'm making an outdoors/hiking app, and one of the features I'd like to build is a trail progress tracker. I've seen other apps that do this, namely maps.me, which is somehow able to continue tracking your location on a topo map even after GPS and WiFi become inaccessible. I've been trying to research how this is done but am coming up with nothing, possibly because there's some terminology involved that I'm not familiar with. If anyone can offer any links or hints as to what I should be looking up in relation to this, that would be fantastic.
Thanks!

Using Twisted to track GPS Locations on an iPhone

Recently, while developing an app on the iPhone, I came across the problem of tracking vehicles. It was easy to track the vehicles on a map if they were stationary using Parse (although I'm not sure if it was the best method), but the issue was tracking vehicles that were moving. I didn't want to query for geopoints in Parse unnecessarily if the location of the vehicle did not change. I was steered towards using Twisted, and after doing some investigation, realized this might be a solution. Using the reactor loop, when locations change I could notify the other users and update their maps appropriately. Conceptually, I understand this problem, but I'm having trouble finding information or help regarding GPS with Twisted.
I have been running the GPS example from the site, http://twistedmatrix.com/documents/12.0.0/core/examples/gpsfix.py
Using my MacBook Pro to test, I found the available serial port and it attempts to open an NMEAReceiver, but I was expecting a GPS location to be written. Once I understand how to interact with the GPS, I feel I could tackle communicating this information to the iPhone with NSStreams, in the fashion of this tutorial, except that instead of sending text messages it will be sending GPS locations:
http://www.raywenderlich.com/3932/networking-tutorial-for-ios-how-to-create-a-socket-based-iphone-app-and-server
Overall, my question is: how can I access the GPS coordinates of a device using Twisted, as in the example provided? I hope my question was detailed enough, and I would be more than happy to correspond with someone about any further details. Thank you.
I (eventually) wrote twisted.positioning, which is essentially a better version of the twisted.protocols.gps module you're using. It has much nicer abstractions over concepts like positions, as well as receivers. That may be interesting to you, because it provides abstractions that you can use to, e.g., combine information from GPS and other sources (like a compass). However, I think that in iOS-land, that job is already (mostly) handled by Core Location. I'd assume that the best course of action is to hook that up to twisted.positioning (it shouldn't be particularly difficult; it can't be anywhere near as hard as NMEA is, at least!). Lacking iOS development experience, I can't tell you how to access Core Location from Python; I can only point at the docs.
twisted.positioning is also an improvement when it comes to documentation. Unfortunately, that wasn't very difficult, because its predecessor came with none at all. I hope the one scant example that is provided helps, though; I'd be more than happy to elaborate if it doesn't.
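For reference, here is a minimal sketch of the twisted.positioning pattern for reading a serial NMEA GPS, roughly the modern equivalent of the gpsfix example; the device path and baud rate below are placeholders for whatever your MacBook actually exposes, and pyserial is assumed to be installed.

    from twisted.internet import reactor
    from twisted.internet.serialport import SerialPort
    from twisted.positioning import base, nmea

    class PrintingReceiver(base.BasePositioningReceiver):
        """Gets called with positioning events decoded from the NMEA stream."""
        def positionReceived(self, latitude, longitude):
            print("fix:", latitude, longitude)

    receiver = PrintingReceiver()
    adapter = nmea.NMEAAdapter(receiver)    # turns NMEA sentences into positioning events
    protocol = nmea.NMEAProtocol(adapter)   # parses raw NMEA coming off the serial port

    # '/dev/tty.usbserial' and 4800 baud are placeholders for your GPS device.
    SerialPort(protocol, '/dev/tty.usbserial', reactor, baudrate=4800)
    reactor.run()

Once positions arrive in positionReceived, forwarding them over a socket to the iPhone (as in the Ray Wenderlich tutorial) is a separate, ordinary Twisted networking problem.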

Use both GPS and Scan function in same channel with Junaio AR

As the title says, I'm simply wondering if it is possible to use the GPS tracking and POI part of Junaio and, at the same time, use the scan functionality to scan and recognize images. I'm working with a group on a project which demands that we use both functionalities, and we are at the moment stuck trying to send two XML documents, which causes the server to return nothing at all. I simply want to know if it is possible to use both functionalities in the same channel, and I would greatly appreciate it if someone would point me in a direction that could help me solve our problems, since I've been able to find absolutely nothing on my own. Thanks in advance!
Scan + GPS/compass is not possible at the moment.
However, it's possible to use GPS/compass tracking and continuous visual search at the same time. This might be the closest thing to your requirements.
You might find more information on http://helpdesk.metaio.com

Is there an iOS library available to compute connection latency and download rate?

I would like to have an idea of the "network quality" from my app to adapt the user experience.
I know that it is possible to tell whether the connection is WiFi or WWAN (EDGE/3G), but it seems it's not possible to distinguish between 3G and EDGE.
My idea is to test the connection by measuring latency and download rate.
Before making the library myself, is there an available library for that?
Of course, I need an official solution that doesn't use private APIs.
There's no lib AFAIK, because it's a very complex problem. If you were to only calculate the download speed, how would you relate that to the app's responsiveness? The latter is ultimately what is important to the user; download speed is only one factor in a more complex equation. Also, how would you react to temporary network disruptions or sluggishness (fairly common on mobile networks)? They do not necessarily imply a fallback onto lighter images.
It's a Pandora's box in my view. I'd tend to design the app for the worst-case scenario if I were you: small images that still look good, bulletproof user notifications when the network stack says something went wrong (timeout, wrong URL, no response from the server, etc.), meaningful transitions from low-res to high-res pictures, the ability to cancel an ongoing request, progress bars, and so on.
When the user is notified about what's going on, he/she will avoid or understand edge-case situations, and will not blame the app if something goes wrong.
Now, if you want to address the app performance paradigm, I've started to look into https://newrelic.com/docs/site/apdex (I have no stake in their business; it's provided as an option with my MBaaS). It seems there are good things there, but I haven't integrated any of their features yet.
Good luck!
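For what it's worth, the measurement itself is simple to prototype before deciding whether it is worth shipping. Below is a rough sketch of the logic the question describes (a small request to estimate latency, a larger known file to estimate throughput), written in Python purely to illustrate the idea rather than as iOS code; the URLs are placeholders you would replace with your own endpoints.

    import time
    import urllib.request

    # Placeholder endpoints: a tiny resource and a larger file of known size.
    PING_URL = "https://example.com/"
    FILE_URL = "https://example.com/testfile.bin"

    def measure_latency(url=PING_URL, samples=3):
        """Approximate latency as the round-trip time of small GET requests."""
        times = []
        for _ in range(samples):
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read(1)                 # the first byte is enough
            times.append(time.monotonic() - start)
        return min(times)                    # best case filters out transient spikes

    def measure_download_rate(url=FILE_URL):
        """Approximate throughput in bytes/second by timing a full download."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=30) as resp:
            size = len(resp.read())
        return size / (time.monotonic() - start)

    if __name__ == "__main__":
        print("latency (s):", measure_latency())
        print("rate (B/s):", measure_download_rate())

The hard part, as the answer above points out, is not taking these two numbers but deciding what the app should actually do differently once it has them.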

bluetooth communication in nxj

I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where does the processing of this data take place?
In other words, I want to know whether this data is processed on the PC or on the brick.
Second, what are the exact roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched almost every web site, but I can't find this anywhere.
Please help me. Thanks.
You can see it in the package structure.
lejos.nxt.*
This package contains classes running on the NXT-brick. All code in this package will be compiled for the brick and will run on the brick.
lejos.pc.*
Here the difference is not as clear. This is Java code you compile for the personal computer, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) only send messages to the NXT brick, which then gives commands to the motors.
lejos.pc.comm provides APIs that allow you to communicate with and control the NXT robot from the PC.
When you import the libs into an Android project, it allows you to build an instance of the same environment used on a PC, but within Android.
I agree it can be tough finding some things out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.nxt.
Both provide APIs to do almost the same thing, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know that it is the one that lets you manipulate the robot much more effectively, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both the Android Bluetooth APIs and its own protocols to allow communication via LEGO LCP commands.
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update it in a couple of days. It seems not many are interested, though; shame.)
