Bluetooth communication in NXJ - lejos-nxj

I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where is this data actually processed?
In other words, I want to know whether this data is processed on the PC or on the brick.
Second, what are the exact roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched many websites but I can't find this anywhere.
Please help me. Thanks.

You can see it in the package structure.
lejos.nxt.*
This package contains classes that run on the NXT brick. All code in this package is compiled for the brick and runs on the brick.
lejos.pc.*
Here the difference is not as clear. This is Java code you compile for the personal computer, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) only send messages to the NXT brick, which then issues the commands to the motors.

lejos.pc.comm provides APIs that allow you to communicate with and control the NXT robot from the PC.
When importing the libs into an Android project, it allows you to build an instance of the same environment used on a PC, but within Android.
I agree it can be tough finding some things out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.nxj.
Both provide APIs to do almost the same things, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know that it is the one that lets you manipulate the robot much more effectively, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both the Android Bluetooth APIs and its own protocols to communicate using Lego LCP commands.
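To make the LCP idea concrete, here is a minimal, hedged sketch (Python with the third-party PyBluez library) that sends one LCP direct command, PLAYTONE, straight over RFCOMM from a PC. The brick address is a placeholder, and the byte values follow my reading of LEGO's Bluetooth Developer Kit documentation, so treat it as a sketch rather than a definitive implementation:

    import bluetooth  # third-party PyBluez library, assumed installed

    NXT_ADDR = "00:16:53:00:00:00"  # placeholder: your brick's Bluetooth address

    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((NXT_ADDR, 1))  # the NXT listens on RFCOMM channel 1

    # LCP direct command: 0x80 = direct command without reply, 0x03 = PLAYTONE,
    # then frequency (440 Hz) and duration (500 ms) as little-endian 16-bit values.
    cmd = bytes([0x80, 0x03, 0xB8, 0x01, 0xF4, 0x01])

    # Over Bluetooth, LCP frames carry a 2-byte little-endian length prefix.
    sock.send(bytes([len(cmd) & 0xFF, (len(cmd) >> 8) & 0xFF]) + cmd)
    sock.close()

The point is that the brick side just interprets these command packets; the "thinking" happens wherever you write your program, PC or brick.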
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update it in a couple of days. Seems not many are interested, though, which is a shame.)

Related

Getting started - creating an iPhone app that controls another (non-iOS) device via Bluetooth commands

All,
Apologies in advance - this question might be too open-ended for SO.
Anyway... A friend of mine (an engineer and entrepreneur) is in the process of building a high-tech piece of lab equipment. He's asked me about the feasibility of building an iPhone/iPad/iPod application that would allow users to control the device via Bluetooth, so I'm helping him gather some information. I'm hoping to get a few pointers on how to get started. Specifically:
Would this require a native app, or could this be accomplished with HTML5 (with or without something like PhoneGap)?
Can you point me to a good primer on Bluetooth networking? Everything I've found assumes a VERY high level of pre-existing knowledge.
What are the basics of how something like this is accomplished? Is there a single, established protocol for how one device "controls" another, or is Bluetooth more like SSL - just a pipe that allows you to convey any type of message?
I realize this question is incredibly broad and detailed - so I'm not really looking for specifics. But obvious Google searches don't turn up much, and I'm otherwise having a hard time finding a good starting point.
Thanks in advance.
You can communicate via Bluetooth in two ways. One is using the Bluetooth Low Energy capabilities of iOS 5 and newer iPhones/iPads.
https://developer.apple.com/library/ios/#documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/_index.html#//apple_ref/doc/uid/TP40011295
Unfortunately the documentation is sparse and will require some hacking away. If you choose this route I would consider starting here and learning as much as you can about how the protocols work before hacking into the framework:
http://developer.bluetooth.org/gatt/services/Pages/ServicesHome.aspx
The limitation of this route is that it might not be best for sending a lot of data. I have only built things that sent simple commands, which it works great for.
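If it helps to see the GATT service/characteristic model from the bluetooth.org pages above in action before committing to CoreBluetooth, here is a minimal sketch using Python with the third-party bleak library (not an Apple API, just a quick way to poke at a peripheral from a laptop). The device address is a placeholder; the UUID is the standard Battery Level characteristic:

    import asyncio
    from bleak import BleakClient  # third-party cross-platform BLE library

    ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder: your peripheral's address
    BATTERY_LEVEL = "00002a19-0000-1000-8000-00805f9b34fb"  # standard GATT UUID

    async def main():
        async with BleakClient(ADDRESS) as client:
            # Read one characteristic; this is the whole GATT interaction model:
            # connect, then read/write/subscribe to characteristics by UUID.
            data = await client.read_gatt_char(BATTERY_LEVEL)
            print("Battery level: %d%%" % data[0])

    asyncio.run(main())

Your lab device would expose its own custom service and characteristics, and the iOS app would read and write them the same way.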
The other option is the External Accessory framework. This will require you to get an MFi license from Apple (not fun). You will also need to pay royalties. But it will do what you want. You won't need to concern yourself much with underlying protocols if you use this; the framework provides a friendly API for processing streams.
http://developer.apple.com/library/ios/#documentation/ExternalAccessory/Reference/ExternalAccessoryFrameworkReference/_index.html

How to write software to sync files to an iPad

I have this idea of writing an application to automatically sync files to a specific place on an iPad every time the iPad is plugged into the computer.
The problem is I've never developed software like this before. Right now I have these two big questions:
- How to detect when an iPad is plugged into the computer?
- How to connect to the iPad and copy files over to it?
To make things clear, the application I want to develop should have functions similar to iTools (not iTunes).
Does anyone here have experience in developing this kind of application? Would you please share with me how to start with this project, because I'm clueless :(
There is a rather simple option: use an internet-based service to accomplish this task, just as Dropbox, iCloud and similar services already do. Maybe you can get a lot closer to your goals by simply connecting to the API of Dropbox, SugarSync or the like.
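For illustration, here is roughly what the service route looks like with the official Dropbox Python SDK (a sketch; the access token and paths are placeholders). You run this on the computer side, and the iPad's Dropbox client pulls the file down:

    import dropbox  # official Dropbox SDK for Python

    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder OAuth token

    # Upload (or overwrite) one file into a folder the iPad app syncs.
    with open("report.pdf", "rb") as f:
        dbx.files_upload(f.read(), "/ipad-sync/report.pdf",
                         mode=dropbox.files.WriteMode.overwrite)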
Using a direct (USB) connection to the device will be rather tough to implement and, to my knowledge, will prevent you from selling the resulting software through Apple's channels. I am not saying that it is impossible (see iExplorer), but I am saying that such an endeavor will involve a lot of reverse engineering of undocumented functions, to a degree that might be considered illegal in certain countries. Additionally, maintaining such software will be very demanding, as Apple frequently introduces changes to their communication protocol(s).

What steps should be taken to make sure that OpenCV code running on a PC will run on a particular embedded device?

I want to port some good OpenCV code to an embedded platform. Such things used to be very difficult, but now TI has come up with nice embedded platforms that are comparatively hassle-free, or so they say.
I want to know the following things:
Given that:
The OpenCV code is already running smoothly on a PC. (obviously)
I need to determine these things before purchasing the device.
I can't put the code here on Stack Overflow. :P
The device to choose from Texas Instruments: C6000.
Questions:
How can I make sure that the port can be done?
What steps should be taken to make sure that, after porting, the code will (at least) run?
How do I determine whether the code might require some changes to make it run smoothly?
Point 3 above is optional.
I need info that will at least give me somewhere to start in this regard.
What I thought I should do:
List the built-in functions used.
Then find available online benchmarks for those functions on the particular device, like the ones shown towards the end of this doc.
...
I need to know how to proceed further.
However, the C6-Integra™ DSP+ARM processor seems the best.
The best you can do is to try a device simulator (if one is available), but what you'll see there is far from perfect.
Actually, nothing can tell you how fast and how well the app will run on the embedded device before you run your specific app on that specific device.
So:
Step 1: Buy it.
Step 2: Try it.
Things to consider:
Embedded CPU architecture: Does your app need a big cache? How big is the embedded cache?
Algorithm: Do you use a lot of floating-point operations? How good is the device at floating-point ops?
Memory transfers: Do you have a lot of them? The data bus on a PC is waaay faster than on an embedded device.
Hardware support: Do you use a lot of double-precision calculations? They are emulated on ARMs and will kill your app (from milliseconds on a PC it can go to seconds on an ARM).
Acceleration: Do your functions use SSE? (Many OpenCV functions are SSE-optimized, even if you don't know it.) Do you have the NEON counterpart? (OpenCV does not have much support for that.) The difference can be orders of magnitude from x86 with SSE to embedded without NEON.
And many, many others.
So, again: no one can tell you how it will work. Only the combination of your specific app and the real device tells the truth.
Even a run on a similar device is not conclusive: the app can run smoothly on one processor, and on another with a similar frequency or listed memory it can slow down far too much.
This is an interesting question, but "run" is a very generic word in this context, so I feel the need to break it down into two other questions:
Will it compile on the embedded device?
Will it run as fast/smoothly as on a PC?
I've used OpenCV on a lot of different devices, including ARM, SH4 and MIPS, and I found out that sometimes the manufacturer of the device itself provides a compiled version of OpenCV (to my surprise), which is great. That's something you can look into; maybe the manufacturer of your device provides OpenCV binaries.
There's no way to know for sure how smooth your OpenCV application will be on the target device unless you can find some benchmark of OpenCV running on it. PCs have far better processing power than embedded devices, so you can expect less performance from the target device.
There are 3rd-party applications, like opencv-performance, that you can use to test/benchmark the environment once you get your hands on it. And if performance is such a big deal in this project, you might also be interested in this nice article, which explains some timing tests done on a couple of OpenCV features, comparing implementations using the C and C++ interfaces of OpenCV.
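If you can't find published numbers, a crude timing loop goes a long way. Here is a minimal sketch (Python/OpenCV bindings; the input image and the Canny call are placeholders for whatever your real pipeline does) that you run identically on the PC and on the target and compare:

    import time
    import cv2  # OpenCV Python bindings

    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder test image

    start = time.time()
    for _ in range(100):
        edges = cv2.Canny(img, 50, 150)  # stand-in for the operation you care about
    elapsed_ms = (time.time() - start) / 100 * 1000
    print("Canny: %.2f ms per frame" % elapsed_ms)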

Emulate GPS or a serial device

Is it possible to get location data out of Google Gears, the Google Geolocation API or any other web location API (such as Fire Eagle) in such a format that it appears to other software as a GPS device?
It occurred to me, reading these answers to my question regarding WiFi location finding on Super User, that if I could emulate a GPS unit, many of these web services could act as a "poor man's GPS" for otherwise less useful software that requires one.
Is GPSD an option?
Preferably OSX & Python, but I would be interested in any implementation.
There is a very similar thread on a Python mailing list that mentions Windows virtual COM ports and discusses Unix's pseudo-tty capabilities. If the app(s) you want to use let you type in a specific tty device file, this may be the easiest route. (Short of asking the authors to provide a plugin API for what you're trying to do, or buying yourself a $20 Bluetooth GPS mouse.)
Are you using OS X?
There is a project, macosxvirtualserialport, on Google Code that provides a graphical wrapper around some of the features of a utility called socat. I'd recommend taking a look at socat if you see potential in the pseudo-tty route. I believe you could use socat to link a pipe from a Python program to a pseudo-tty.
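As a minimal sketch of the pseudo-tty idea (Python standard library only, no socat; Unix/OS X): create a pty pair, tell the GPS-consuming app to open the slave device, and stream sentences through the master end. The GGA sentence below is the textbook example, a placeholder for whatever you build from the web service:

    import os
    import pty
    import time

    master, slave = pty.openpty()
    print("Point your GPS app at:", os.ttyname(slave))

    while True:
        # Replace this placeholder with NMEA sentences built from the web service.
        os.write(master, b"$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47\r\n")
        time.sleep(1)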
Most native Mac apps will be querying IOServiceMatching for a device with kIOSerialBSDRS232Type, and I doubt that a pseudo-tty will show up as an IOKit service.
In this case, unless you can find a project that has already implemented such a thing, you will need to implement a driver as described in this "How to create virtual COM port" thread. If you're going to the trouble of creating a device driver, you would want to base it on IOKit because of that likely IOServiceMatching query. You can find the Apple16X50Serial project mentioned in that post at the top of Apple's open source code list (go to the main page and pick an older OS release if you want to target something pre-10.6).
If your app is most useful with realtime data (e.g. the RouteBuddy app mentioned in the Python mailinglist thread can log current positions) then you will want to fetch updates from your web sources (hopefully they support long-polling) and convert them to basic NMEA RMC sentences. You do not want to do this from inside your driver code. Instead, divide your work up into kernel-land and user-land pieces that can communicate, and put as little of the code as possible into the kernel part.
If you want to let apps both read and write to these web services, your best bet would probably be to simulate a Garmin device. Garmin has more-or-less documented their protocol in the IntfSpec.pdf file included with their Device Interface SDK. Again, you'd want to split as much as you could into user-space code.
I was unable to find a project or utility that implements the kernel side of an IOKit-based virtual serial interface, but I'd be surprised if there wasn't one hiding somewhere out there. Unfortunately, most of the answers I found to that question were like this, with the developer being told to get busy writing a kext.
I'm not exactly sure how to accomplish what you're asking, but I may be able to lend some insight as to how you might begin to get it done. So here goes:
A GPS device shows up to most systems as nothing more than a serial device -- a.k.a. a COM port if you're dealing with Windows, /dev/ttySx if you're in *nix. By definition, a serial port's specific duty is to stream data across a bus, one block at a time. So, it would then follow logically that if you want to emulate the presence of a GPS device, you should gather the data you're consuming and put it into a stream that somehow acts like an active serial port.
There are, however, some complications you might want to consider:
Most GPS devices don't just send out location data; there's also information on satellite locations, fix quality, bearing, and so on. Then again, nobody's made any rules saying you have to make all that data available. There's probably more to this, but I'll admit that I need to do more research in this area myself.
I'm not sure how fast you can receive data when dealing with Google Latitude, etc., but any delays in receiving would definitely result in visible pauses in your "serial port"'s data stream. Again, this may not be as big a complication as it seems, because GPS devices are known to "burst" data across the bus anyway, but I'd definitely keep an eye on that. You want to make sure there's always a surplus of data coming across, not a shortage.
Along the way, you'll also have to transform the coordinates you receive into valid GPS sentences. You can find specifications for those, but I would definitely make friends with the NMEA standard; even though it is a flawed standard, it's the one everyone seems to agree on anyway.
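The framing itself is simple. A minimal sketch (the RMC body below is the textbook example fix, not real data): the checksum is the XOR of every character between the "$" and the "*", rendered as two hex digits:

    def nmea_sentence(body):
        """Wrap an NMEA sentence body in $...*XX framing with CRLF."""
        checksum = 0
        for ch in body:
            checksum ^= ord(ch)  # NMEA checksum: XOR of all chars between $ and *
        return "$%s*%02X\r\n" % (body, checksum)

    # Textbook RMC example: time, status, lat, lon, speed, course, date, variation.
    print(nmea_sentence("GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W"))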
Hope this helped you, at least a little bit. Are there any more details specific to your problem that you think could be useful in answering this question?
Take a look at Franson GPS Gate, which allows you to connect to Google Earth among other things (like simulating GPS and so on). It is Windows-only, though, but I think you could get some useful ideas from it.
I haven't looked into it very much, but have you considered using Skyhook's SDK? It might provide you with some of what you are looking for. It's available for every major desktop and mobile OS.

Carmen Robotics

I have been working with Carmen http://carmen.sourceforge.net/ for a while now, and I really like the software but I need to make some changes inside the source code.
I am therefore interested in any student reports/projects that have worked with Carmen, or any documentation of the source code.
I have been reading the documentation on the Carmen webpage, but with all due respect, I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is only okay, but it's still the best I've seen among robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
Built-in visualizer/simulator: rviz.
- It has a record function which records all of the messages passed out of nodes. This lets you take in a lot of raw data, store it in a "ros bag", and then play it back later when you need to test your AI but want to sit in your bed.
Built-in navigation capabilities:
- All you have to do is write the publishers of data for your sensors (see the sketch after this list).
- It has standard messages that you need to fill out so that the stack has enough information.
There is an Extended Kalman Filter, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
It also has built-in message levels; by that I mean you can change which severity of print messages gets printed at runtime, which is fairly handy for debugging.
There's a robot monitor node that you can publish the status of your sensors to, and it bundles all of that information into a GUI for your viewing pleasure.
There are some basic drivers already written; for example, SICK lidars are supported right out of the box.
There is also a built-in transform facility, to help you move everything into the right coordinate system.
ROS was made to run across multiple computers, but it can work on just one.
Data transfer is handled over TCP ports.
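To show what "just write the publishers" means, here is a minimal rospy sketch (the node name, topic and frame id are placeholders, and the queue_size argument assumes a reasonably recent ROS release):

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan  # standard message type for lidars

    def run():
        rospy.init_node('my_lidar_driver')  # placeholder node name
        pub = rospy.Publisher('scan', LaserScan, queue_size=10)
        rate = rospy.Rate(10)  # publish at 10 Hz
        while not rospy.is_shutdown():
            msg = LaserScan()
            msg.header.stamp = rospy.Time.now()
            msg.header.frame_id = 'laser'  # placeholder frame for the transform tree
            # ...fill msg.ranges, msg.angle_min, msg.angle_max from your sensor...
            pub.publish(msg)
            rate.sleep()

    if __name__ == '__main__':
        run()

Once something is publishing standard messages like this, the navigation stack, rviz and rosbag all consume them for free.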
I hope that's more helpful.
