How can I provide mobility to motes in Cooja? - contiki

I would like to assign mobility to motes that are already deployed. The deployment method can be any of the positioning options available in Cooja. Can anyone please tell me how to assign mobility?

As far as I know there is currently no working mobility plugin. I see the following options:
You can use the test script to move your motes around. This needs some looking into the Cooja code, but shouldn't be too difficult, especially when taking the examples from the wiki as a starting point.
You can get the mobility plugin working again. That shouldn't be too much work either, and lots of people will thank you. Adjusting the namespaces should get you quite far, but it might be a bit more.
If you can derive connection properties from your mobility model, you can use RealSim to feed these into Cooja.

Related

How to pretrain my images using ResNet50 in Mask R-CNN

I am researching Mask R-CNN. I want to know how to pretrain on my own images (knife, sofa, baby, ...) using ResNet50 in Mask R-CNN. I have tried to find this on GitHub, but I can't. Please, can anyone who knows how to handle this help?
Try this implementation of Mask R-CNN on GitHub here.
You can follow the Mask_RCNN GitHub repo. It has both ResNet50 and ResNet101 backbones. It is a beautiful implementation, I would say. The base model is from FAIR (Facebook AI Research). There is a demo file which you can check before starting your work.
If the demo works well, you can see my answer; it will help you train the model with your custom data (a minimal sketch is also given after the list below). The answer is a bit long, but it lists all the steps.
Something which I personally like about this implementation is:
It is easy to set up. It won't bother you much about dependencies; having a Python virtual environment does wonders.
It falls back automatically from GPU to CPU and vice versa.
It has good support from its developers and gets commits frequently.
The code is very customisable, so if you want to make some changes it's pretty easy: tweak a few booleans and numbers and you are done.
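As a rough illustration of the training workflow that answer describes, here is a minimal sketch against the matterport Mask_RCNN API, starting from the released COCO weights. The class names, file paths, and dataset objects are placeholders you would fill in yourself following the repo's sample scripts; this is a sketch, not a drop-in script.

from mrcnn.config import Config
from mrcnn import model as modellib

class CustomConfig(Config):
    NAME = "custom_objects"
    NUM_CLASSES = 1 + 3          # background + your own classes (knife, sofa, baby)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 100

config = CustomConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs/")

# Start from the COCO weights, skipping the layers whose shapes depend on NUM_CLASSES
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# dataset_train and dataset_val are instances of your own utils.Dataset subclass
# (see the repo's samples for how to build one from annotated images)
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=30, layers="heads")

Training only the "heads" layers first is the usual way to fine-tune on a small custom dataset before unfreezing more of the ResNet backbone.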

Tools for searching full text in iOS bundle

Sorry for the generalized question...I have been hunting for a long time and haven't found anything I can use or easily adapt yet. I'd really appreciate any pointers!
I'm building a reference app that will contain several textbooks in plain-text format. I want the user to be able to perform a search, and get a table back with a list of results. I have a working prototype, but the search logic that I wrote isn't all that smart and it's been hell trying to make it better.
This is obviously a fairly common problem so I'm looking for a tool that I could adapt to the task. So far I've found Lucene (http://vafer.org/blog/20090107014544/) and Locayta (http://www.locayta.com/iOS-search-engine/locayta-search-mobile/)
Lucene appears to have been last updated for iOS 2...I don't even know if I'll be able to rework it myself. Maybe.
Locayta would probably work great, but a commercial license is $1,000 and I may not soon recoup that with this app, as it's a niche market.
Thanks!
We stumbled upon the same predicament where I work, and have yet to decide on a solution.
Locayta seems promising, but barring that, I've looked into SQLite's FTS3/FTS4 as well.
The only issue seemed to be the lack of a way to match partial words. It's easy to search for fields that contain whole words (e.g. "paper" matches "printer paper", "paper punch", and "sketch paper") or words that start with something (e.g. "bi*" matches "binder" and "bicycle"), but there's no built-in way to match a suffix.
If you don't require that functionality, FTS3/FTS4 might work.
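For illustration, here is a minimal sketch of those MATCH semantics using Python's sqlite3 module (assuming your local Python build has FTS4 compiled in, which most do); the SQL itself carries over unchanged to an iOS amalgamation build.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts4(content)")
conn.executemany("INSERT INTO docs (content) VALUES (?)",
                 [("printer paper",), ("paper punch",), ("binder",), ("bicycle",)])

# Whole-word match: rows whose text contains the word "paper"
print(conn.execute("SELECT content FROM docs WHERE content MATCH 'paper'").fetchall())

# Prefix match: rows containing a word that starts with "bi"
print(conn.execute("SELECT content FROM docs WHERE content MATCH 'bi*'").fetchall())

# Note there is no built-in suffix match (e.g. '*per'), which is the limitation mentioned above.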
I see you mentioned in the follow-up that your SQLite didn't recognize FTS3(), and I had the same issue at first.
Apparently it's not bundled into the iOS version by default; instead you have to download the SQLite3 amalgamation and include it in the project manually, as described in "Is FTS available in the iOS build of SQLite?"
Also note that the SQLITE_ENABLE_FTS3 compile-time flag is not enabled by default; you have to add it to the build configuration as detailed at http://www.sqlite.org/fts3.html#section_2
Hope this helps.
If you can translate plain C code to iOS Objective-C, then Apache Lucy (a loose "C" port of Lucene) might be worth a look.

OpenCV and Computer Vision, where do we stand now?

I want to do a project involving Computer Vision, mostly object detection/identification. After some research, I keep coming back to OpenCV, but all of the tutorials are from 2008 (I guess it was big for a bit then). It apparently doesn't compile in Python on the Mac. I'm using the C++ framework right out of Xcode, but none of the tutorials work as they're outdated, and the documentation sucks from what I can parse.
Is there a better solution for what I'm doing, and does anyone have any suggestions on learning how to use OpenCV?
Thanks
I have had similar problems getting started with OpenCV and from my experience this is actually the biggest hurdle to learning it. Here is what worked for me:
This book: "OpenCV 2 Computer Vision Application Programming Cookbook." It's the most up-to-date book and has examples on how to solve different Computer Vision problems (You can see the table of contents on Amazon with "Look Inside!"). It really helped ease me into OpenCV and get comfortable with how the library works.
Like others have said, the samples are very helpful. For things that the book skips or covers only briefly, you can usually find more detailed examples in the samples. You can also find different ways of solving the same problem between the book and the samples. For example, for finding keypoints/features, the book shows an example using FAST features:
// Detect FAST keypoints (intensity threshold 40) in a cv::Mat named image
vector<KeyPoint> keypoints;
FastFeatureDetector fast(40);
fast.detect(image, keypoints);
But in the samples you will find a much more flexible way (if you want to have the option of choosing which keypoint detection algorithm to use):
// The same detection via the factory interface, so the algorithm is chosen by name at runtime
vector<KeyPoint> keypoints;
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("FAST");
featureDetector->detect(image, keypoints);
From my experience things eventually start to click and for more specific questions you start finding up-to-date information on blogs or right here on StackOverflow.
Let me add a couple of things. First, I can assure you that the Python bindings to OpenCV work on a Mac. I use them every day.
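As a quick check that the bindings are working, here is a minimal Python sketch mirroring the C++ FAST example above. The names below are from the newer cv2 API; in the 2.x bindings of that era the calls were spelled differently, so check against your installed version, and image.png is just a placeholder file name.

import cv2

# Load any test image in grayscale (image.png is a placeholder)
image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# FAST keypoint detection, the Python counterpart of the C++ snippet above
fast = cv2.FastFeatureDetector_create(threshold=40)
keypoints = fast.detect(image, None)
print(len(keypoints), "keypoints detected")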
Many people like OpenCV for many reasons:
The license is good, friendly to integration into commercial products, etc.
It is quite good from a technical standpoint. It gives you a reference implementation of state-of-the-art algorithms.
It tends to be quite fast compared to the alternatives (MATLAB, I'm looking at you).
Like everything in life, it is not perfect:
It is a good example of a software library that is a moving target. I have a 300-line Python program that uses OpenCV, and every few months when a new version of OpenCV is released I have to change it to adapt to the new function names/calling conventions, etc. The library does advance, a lot, but it is a pain to have to change the same program 3 times per year.
It has a learning curve; like computer vision itself, it is quite technical and not easy to learn.
There are alternatives (with other pros and cons); MATLAB with the Image Processing Toolbox is one such example.
The simplest answer that comes to mind is to read the example code with a bit of understanding and try out whether your ideas work. The API does change, and most of the tutorials were written for the first versions of OpenCV, and it looks like nobody has bothered to rewrite them. Nevertheless, the core ideas behind it are not changing. So if you find a tutorial answering your questions but written against the old API, just look in the documentation for modern replacements of the functions it uses. It's not easy or quick, but it works.
If you use the newest version (currently 2.3), I suggest using both the 2.1 documentation and the 2.3 docs + tutorials. You should also look into the samples, which should have been installed alongside the library. There are lots of hints there about how to use certain structures, and tricks that aren't mentioned in the documentation.
Finally, don't be afraid to look inside the code of the library itself (if you compiled it on your own). Unfortunately, that's the only source I know of to check, for example, which code corresponds to which type of Mat object.

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether realtime audio input matches one of 144 given (and comfortably distinct) phoneme-pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers ("iPhone App › Add voice recognition?" is the best source of information I have found). However, I can't see how I would adapt such a package; can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far larger and more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for the recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.
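For a feel of what that kind of lightweight matching involves, here is a hedged, prototype-only sketch of template matching with MFCC features and dynamic time warping, a deliberately simpler technique than a full recogniser. It uses the librosa Python library on the desktop (not an iOS library), and the file names are hypothetical: one short reference recording per phoneme pair, compared against a live snippet.

import librosa

def mfcc(path):
    # Load audio and compute MFCC frames (13 coefficients per frame)
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

def dtw_cost(template_path, snippet_path):
    # Align the two MFCC sequences with DTW; lower total cost = better match
    acc_cost, _ = librosa.sequence.dtw(X=mfcc(template_path), Y=mfcc(snippet_path),
                                       metric="euclidean")
    return acc_cost[-1, -1]

# Hypothetical file names: one template per phoneme pair the user recorded
templates = {"laa": "laa.wav", "duu": "duu.wav", "bee": "bee.wav"}
scores = {name: dtw_cost(path, "live_snippet.wav") for name, path in templates.items()}
print("best match:", min(scores, key=scores.get))

Whether this scales to 144 pairs in real time on a phone is an open question; it is only meant to show the minimum-technology end of the spectrum.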

Comprehensive graphing and charting solutions...?

I need to produce LINE, BAR, and PIE charts in Rails. I have found several libraries that do all of these. The one caveat, however, is that I can never find a solution that also does XY-SCATTER. I've looked at Gruff, Scruffy, Gnuplot, etc., and none of them do everything. Can anyone recommend a local library (i.e. one that doesn't require network connectivity) that covers all of these? GoogleCharts isn't an option as some of this will run on closed networks.
Best.
If you don't mind commercial solutions, take a look at ChartDirector.
Open Flash Chart Plugin for Ruby on Rails - Graphs
FusionCharts is pretty good if you don't have a problem with Flash.
Have you checked Google Charts? I'm not a web developer myself, but it seems that, although not implemented in Rails, you can call the chart API and display the generated image.
Looking at the available chart types, it can give you all the types you want.
Edit: A search turned up this google-chart-on-rails wrapper.
