Cooja - In-Building Models & Mote Choice

I'm currently working on a 3-month project based on Contiki-NG and Cooja at my university. I have to create in-building models, and I would like to ask two questions, please:
1) Is it possible to add an obstacle, such as a wall, in Cooja, or do you know of a similar tool that can do that?
2) In Cooja, what is the closest mote I can use to simulate a Texas Instruments CC2650 LaunchPad? (The Z1 mote isn't available in the new version of Contiki-NG.)
Thanks!

1) Yes, you can have obstacles in Cooja. For that you need to use the MRM radio medium (Multi-path Ray-tracer Medium). There is not a lot of documentation, but read the comments in the source code and try it out.
2) It is not possible to simulate the hardware-level details of the CC2650. Try using Cooja motes: their support is much improved in Contiki-NG. Also, we are going to add another MSP430-based mote with more RAM in the next release of Contiki-NG (v4.2).

Related

Programming a drone to fly indoors using OpenCV

I am a newbie with drones. I would like to develop a program that uses OpenCV to manage a drone flying indoors over a line.
I have been looking at a lot of SDKs, but almost all of them are GPS-based. I saw there is an alternative called SLAM, which detects the position using the sensors.
Well, I have a line on the floor and a camera on my drone. I like Mission Planner, but I am not quite sure it is the best choice. I will be using a Parrot AR.Drone, but I would like the solution to work with any drone.
Which SDK would you recommend for managing the drone using relative positions or SLAM instead of GPS points?
Well, you have the Parrot API, and a couple of wrappers in different languages: Node-AreDrone for Node.js, PyArdrone for Python, and a wrapper coded in C#, AR.Drone, which I have used. It has a good user interface in which you can see both cameras, record and replay videos, control the drone by clicking buttons, view the drone's metrics and configuration, and send commands in a queue. Because I love C# and those features already come with a user interface, I prefer it. Most of the wrappers are much the same, since they all use the Parrot API underneath by sending UDP messages. I couldn't try the others, so there are a lot of them and somebody else may know which one is best. For Mission Planner, I couldn't find a good solution for indoors. So, for anyone who is lost and doesn't know where to start, as I was: I recommend selecting the language you want and searching for the corresponding wrapper. If you like C# as I do, AR.Drone is a good choice.
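To make the wrapper idea concrete, here is a minimal take-off/land sketch. I'm using the python-ardrone (libardrone) wrapper rather than PyArdrone, simply because its README documents exactly this interface; treat the method names as assumptions to check against whichever wrapper you install, since they all end up sending the same Parrot UDP commands:

```python
import time
import libardrone  # from the python-ardrone wrapper

drone = libardrone.ARDrone()  # connects to the drone's default 192.168.1.1
try:
    drone.takeoff()
    time.sleep(5)
    drone.hover()             # hold position (as well as the AR.Drone can)
    drone.land()
finally:
    drone.halt()              # shut down the wrapper's network threads
```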
Also, if you want to do something with OpenCV, Copterface is a good example. You could implement it in any language that has OpenCV bindings.
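Since the original question is about following a line on the floor, here is a hedged sketch of the OpenCV part in Python (the function and file names are mine, and it assumes the line is darker than the floor): compute where the line sits in the bottom of the camera frame, and feed that offset into whichever SDK you chose.

```python
import cv2
import numpy as np

def line_offset(frame):
    """Horizontal offset of a dark line on a light floor, in [-1, 1],
    or None if no line is visible in the bottom strip of the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    strip = gray[int(gray.shape[0] * 0.8):, :]   # look near the drone first
    # Otsu threshold, inverted: assumes the line is darker than the floor.
    _, mask = cv2.threshold(strip, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]                     # centroid of line pixels
    center = mask.shape[1] / 2.0
    return (cx - center) / center                # <0 steer left, >0 right

cap = cv2.VideoCapture(0)  # replace with the drone's video stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(line_offset(frame))  # feed into the roll/yaw commands of your SDK
```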

Possible use case/real application for a mobile distributed version of TensorFlow?

I'm developing a project in which I'm trying to create a distributed version of TensorFlow (the current open-source version is single-node) where the cluster is composed entirely of mobile devices (e.g. smartphones).
In your opinion, what is a possible application or use case where this could be useful? Can you give me some examples, please?
I know that this is not a "standard" Stack Overflow question, but I didn't know where else to post it (if you know a better place, please let me know). Thanks so much for your help!
http://www.google.com.hk/search?q=tensorflow+android
TensorFlow can be used for image identification and there is an example using the camera for Android.
There could be many distributed uses for this: face recognition, 3D space reconstruction from 2D images.
TensorFlow can be used for a chat bot. I am working towards using it for a personal assistant. The AI on one phone could communicate with the AI on other phones.
It could use vision and GPS to 'reserve' a lane for you on the road. Intelligent, crowd-planned roads and intersections would be safer.
I am also interested in using it for distributed mobile. Please contact me via my user name at Gmail or Skype.
https://boinc.berkeley.edu
I think all my answers could run on individual phones with communication between them. If you want them to act like a cluster, as @Yaroslav pointed out, there are SETI@home and other projects running on the BOINC client.
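For a concrete sense of what "acting like a cluster" means in TensorFlow terms, here is a minimal sketch against the TensorFlow 1.x distributed API (tf.train.ClusterSpec / tf.train.Server). It runs both tasks in one process so it works standalone; on real hardware each phone would start its own server at its own address:

```python
import tensorflow as tf  # written against the TensorFlow 1.x API

# One process stands in for two "phones" so the sketch runs standalone.
cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:2222"],   # task holding the shared variables
    "worker": ["localhost:2223"],   # task doing the computation
})
ps = tf.train.Server(cluster, job_name="ps", task_index=0)
worker = tf.train.Server(cluster, job_name="worker", task_index=0)

with tf.device("/job:ps/task:0"):
    w = tf.Variable(tf.zeros([3]))       # parameters live on the "ps" task

with tf.device("/job:worker/task:0"):
    update = w.assign_add(tf.ones([3]))  # workers compute and push updates

with tf.Session(worker.target) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))              # -> [1. 1. 1.]
```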
TensorFlow could be combined with a game engine. You could have a procedurally generated, AI-learning augmented-reality game that generates the story as multiple players interact with it. I have seen research papers for each of these components.

How to integrate an Arduino board with Apple HomeKit - is it possible or not?

I want to make my own iOS app with HomeKit, which should control an Arduino. I have studied HomeKit, and I am not sure whether it is possible to integrate an Arduino or Raspberry Pi with HomeKit. Any useful links?
I'm not sure about the Arduino; the crypto behind the protocol is fairly complex and I'm not sure the processor could handle it well. I can't find any sample projects out there either. But the Raspberry Pi is another story: since the Pi can run Node, there is a Node implementation of HAP (the HomeKit Accessory Protocol) on GitHub: https://github.com/KhaosT/HAP-NodeJS
I haven't used it, but it's fairly well documented. I doubt there are any tutorials available; these are fairly fringe projects at the moment, so you're going to have to get your hands dirty. Good luck.
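HAP-NodeJS is JavaScript, but if you'd rather prototype in Python on the Pi, the community HAP-python library (my suggestion, not mentioned above) implements the same accessory protocol. A rough sketch, adapted from its README, that advertises a fake temperature sensor you can pair with the iOS Home app:

```python
import random
import signal

from pyhap.accessory import Accessory
from pyhap.accessory_driver import AccessoryDriver
from pyhap.const import CATEGORY_SENSOR

class FakeTemperature(Accessory):
    """A fake temperature sensor, just to prove pairing works."""
    category = CATEGORY_SENSOR

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        service = self.add_preload_service('TemperatureSensor')
        self.char_temp = service.configure_char('CurrentTemperature')

    @Accessory.run_at_interval(3)
    async def run(self):
        # Push a new (fake) reading every 3 seconds.
        self.char_temp.set_value(random.randint(18, 26))

driver = AccessoryDriver(port=51826)
driver.add_accessory(accessory=FakeTemperature(driver, 'MySensor'))
signal.signal(signal.SIGTERM, driver.signal_handler)
driver.start()
```

Whether an Arduino itself can keep up with the pairing crypto is exactly the open question above; the Pi route sidesteps it.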

"Sound" Recognition in Swift?

I'm working on an application in Swift, and I was thinking about a way to get non-speech sound recognition into my project.
What I mean is: is there a way in which I can take sound input and match it against some predefined sounds already incorporated in the project, and, if a match occurs, have it perform some particular action?
Is there any way to do the above? I'm thinking of breaking up the sounds and doing the checks, but I can't seem to get any further than that.
My personal experience matches matt's comment above: this requires serious technical knowledge.
There are several ways to do this. A typical one is as follows: extract some properties from the sound segment of interest (audio feature extraction), and classify that audio feature vector with some kind of machine learning technique. This typically requires a training phase in which the machine learning technique is given examples of the sounds you want to recognize (your predefined sounds) so that it can build a model from that data.
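For illustration, here is a minimal sketch of that exact pipeline in Python (the question is about Swift, but the structure is language-independent). It assumes the librosa and scikit-learn packages and some labeled WAV files whose names are made up; MFCCs stand in for "some properties" and an SVM for "some machine learning technique":

```python
import numpy as np
import librosa                   # audio loading + feature extraction
from sklearn.svm import SVC      # any classifier would do here

def feature_vector(path):
    """Summarize one audio file as a fixed-length MFCC feature vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Training phase: labeled examples of the predefined sounds.
train_files = [("clap1.wav", "clap"), ("clap2.wav", "clap"),
               ("whistle1.wav", "whistle"), ("whistle2.wav", "whistle")]
X = np.array([feature_vector(f) for f, _ in train_files])
labels = [label for _, label in train_files]
clf = SVC(probability=True).fit(X, labels)

# Recognition phase: classify an incoming sound and act on the match.
probs = clf.predict_proba([feature_vector("unknown.wav")])[0]
if probs.max() > 0.8:            # reject uncertain matches
    print("matched:", clf.classes_[probs.argmax()])
```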
Without knowing what types of sounds you're aiming to recognize, maybe our C/C++ SDK, available here, might do the trick for you: http://www.samplesumo.com/percussive-sound-recognition
There's a technical demo on that page that you can download and try with your sounds. It's a C/C++ library with Mac, Windows and iOS versions, so you should be able to integrate it into a Swift app on iOS. Maybe this will allow you to do what you need?
If you want to develop your own technology, you may want to start by finding and reading some scientific papers using the keywords "sound classification", "audio recognition", "machine listening", "audio feature classification", ...
Matt,
We've been developing a bunch of cool tools to speed up iOS development, especially in Swift. One of these tools is TLSphinx: a Swift wrapper around Pocketsphinx that can perform speech recognition without the audio leaving the device.
I assume TLSphinx can help you solve your problem, since it is a totally open-source library. Search for it on GitHub ('TLSphinx'), and you can also download our iOS app ('Tryolabs Mobile Showcase') and try the module live to see how it works.
Hope it is useful!
Best!

Is it a good idea to use a single-board computer in a UAV robot?

I'm not sure whether it's good or bad; the robot should have computer vision for SLAM. What do you think?
It is a great idea to use a single-board computer or system-on-module for developing a UAV robot. In fact, it has already been implemented successfully: I remember seeing a similar implementation using a Toradex Colibri on an Iris carrier board as an entry to the Embedded Design Challenge contest by Toradex. The details of the project, including the video, updates and source code, are at the following link: http://www.challenge.toradex.com/projects/10099-tdxcopter
Yes, that's how we did it when I was in school (albeit nine years ago). You want to focus on algorithms, not learning to program an unfamiliar platform.
Assuming the "A" stands for aerial, don't invest in anything you don't want crashing at high speed. And mind the vibrations.
