Is it possible to integrate an Arduino board with Apple HomeKit? - ios

I want to make my own iOS app with HomeKit, which should control an Arduino. I have read up on HomeKit, but I am not sure whether it is possible to integrate an Arduino or Raspberry Pi with HomeKit. Any useful links?

I'm not sure about the Arduino: the crypto behind the protocol is fairly complex, and I'm not sure the processor could handle it well. I can't find any sample projects out there either. The Raspberry Pi is another story, though. Since the Pi can run Node, there is a Node implementation of HAP on GitHub: https://github.com/KhaosT/HAP-NodeJS
I haven't used it, but it's fairly well documented. I doubt there are any tutorials available; these are fairly fringe projects at the moment, so you're going to have to get your hands dirty. Good luck.
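If you would rather stay in Python on the Pi, there is also a Python port of the same idea, HAP-python (github.com/ikalchev/HAP-python). A minimal fake-sensor sketch following that project's documented pattern might look like the following; the API has shifted between versions, so treat it as a sketch rather than a drop-in:

    # Sketch of a fake HomeKit temperature sensor using HAP-python
    # (a Python port of HAP; API details vary between versions).
    import logging
    import random
    import signal

    from pyhap.accessory import Accessory
    from pyhap.accessory_driver import AccessoryDriver
    from pyhap.const import CATEGORY_SENSOR

    class FakeTempSensor(Accessory):
        category = CATEGORY_SENSOR

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            service = self.add_preload_service('TemperatureSensor')
            self.char_temp = service.configure_char('CurrentTemperature')

        @Accessory.run_at_interval(3)  # push a new reading every 3 seconds
        async def run(self):
            self.char_temp.set_value(random.randint(18, 26))

    logging.basicConfig(level=logging.INFO)
    driver = AccessoryDriver(port=51826)  # pairing PIN is shown in the log
    driver.add_accessory(FakeTempSensor(driver, 'PiSensor'))
    signal.signal(signal.SIGTERM, driver.signal_handler)
    driver.start()

Once it's running, the accessory shows up for pairing in the iOS Home app, and the run callback is where you would read your real hardware instead of a random number.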

Related

Best way to broadcast information to Wi-Fi enabled devices?

I live in a building where the laundry machine is a bit far away from my suite. There is only one machine there so only one person can use it at a time. Quite often I take my laundry there just to realize that the machine is in use so I have to go back defeated and try again later. I want to build a doodad that can detect when the machine is in use and broadcast that information throughout the building so that I know that the machine will be available before I go there.
This question is not about how to build the detector; I am planning on using a Raspberry Pi somehow. This question is about what to do once I detect that the machine is in use: how do I broadcast that information, mainly to myself, or potentially to anyone in the building?
I need a cheap solution (< $100). It has to be wireless. The signal has to travel approximately 30 meters in one direction. The broadcast should be readable on any laptop or cell phone (not a requirement but nice to have).
I was considering making an Android app using Wi-Fi Direct. I'm sure others in the building would be interested in this app, and I could use them as peers to extend the range of the broadcast. But I don't like this solution, because Wi-Fi Direct doesn't work with iOS devices, and it would require others to side-load an APK, which they might not know how to do.
Please let me know if you can come up with something less reliant on P2P and more platform-independent.
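For what it's worth, one platform-independent route that fits these constraints: if the Pi joins the building's existing Wi-Fi, it can serve a tiny status page that any phone or laptop can open in a browser, with no app install. A minimal sketch, where machine_in_use() is a placeholder for whatever detector you end up building:

    # Minimal status page served from the Pi over the building Wi-Fi.
    # machine_in_use() is a placeholder for the actual detector.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def machine_in_use():
        return False  # placeholder: wire this up to your sensor

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"IN USE" if machine_in_use() else b"FREE"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Anyone on the network can now open http://<pi-address>:8000/
    HTTPServer(("", 8000), StatusHandler).serve_forever()

This keeps everything on one cheap device and works in any browser; the trade-off is that it only reaches people on the same network, not a true 30-meter radio broadcast.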

Programming a drone to fly indoors using OpenCV

I am a newbie with drones. I would like to write a program that uses OpenCV to fly a drone indoors along a line.
I have looked at a lot of options, but almost all of them are GPS-based. I saw there is an alternative, called SLAM, that estimates position from the sensors instead.
I have a line on the floor and a camera on my drone. I like Mission Planner, but I am not sure it is the best choice. I will be using a Parrot AR, but I would like the approach to work with any drone.
What SDK would you recommend for controlling the drone with relative positions or SLAM rather than GPS waypoints?
Well, you have the Parrot API, and a couple of wrappers in different languages: Node-AreDrone for Node.js, PyArdrone for Python, and a wrapper coded in C#, AR.Drone, which I have used. It has a good user interface in which you can see both cameras, record and replay videos, control the drone by clicking buttons, view the drone's metrics and configuration, and send commands in a queue. Because I love C#, and those features come already built into the user interface, I prefer it. Most of the wrappers are much the same, since under the hood they all use the Parrot API by sending UDP messages. I haven't tried the others (there are a lot), so anybody is welcome to tell me which one is best. For Mission Planner I couldn't find a good solution for indoors. So, for anyone who is lost and doesn't know where to start, as I was: pick the language you want and search for the corresponding wrapper. If you like C# as I do, AR.Drone is a good choice.
Also, if you want to do something with OpenCV, Copterface is a good example; you could implement it in any language that has OpenCV bindings.
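To give a flavour of the OpenCV side, here is a rough Python sketch of extracting a floor line's offset from a downward camera frame; the threshold and the capture source are assumptions you would tune for your own drone and lighting:

    # Sketch: estimate how far the floor line is from the image centre.
    # Threshold values and the capture source are placeholders to tune.
    import cv2

    def line_offset(frame):
        """Return the line's horizontal offset from centre, or None if lost."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        # Assumes a dark line on a light floor; invert if yours is the opposite.
        _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
        # Only look at the bottom rows, closest to the drone's position.
        roi = mask[int(mask.shape[0] * 0.8):, :]
        m = cv2.moments(roi)
        if m["m00"] == 0:
            return None  # no line pixels found
        cx = m["m10"] / m["m00"]  # centroid column of the line
        return cx - roi.shape[1] / 2  # negative: steer left; positive: right

    cap = cv2.VideoCapture(0)  # replace with the drone's video stream
    ok, frame = cap.read()
    if ok:
        print("offset:", line_offset(frame))
    cap.release()

Feeding that offset into a simple proportional controller on the drone's roll or yaw command is the usual next step, whichever wrapper you choose.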

Possible use case/real application for a mobile distributed version of TensorFlow?

I'm developing a project in which I'm trying to create a distributed version of TensorFlow (the current open-source version is single-node), where the cluster is composed entirely of mobile devices (e.g. smartphones).
In your opinion, what is a possible application or use case where this could be useful? Can you give me some examples, please?
I know this is not a "standard" Stack Overflow question, but I didn't know where else to post it (if you know a better place, please let me know). Thanks so much for your help!
http://www.google.com.hk/search?q=tensorflow+android
TensorFlow can be used for image identification, and there is an example using the camera for Android.
There could be many distributed uses for this: face recognition, 3D space construction from 2D images.
TensorFlow can be used for a chat bot. I am working towards using it for a personal assistant; the AI on one phone could communicate with the AI on other phones.
It could use vision and GPS to 'reserve' a lane for you on the road. Intelligently crowd-planned roads and intersections would be safer.
I am also interested in using it for distributed mobile. Please contact me at my user name at Gmail or Skype.
https://boinc.berkeley.edu
I think all my examples could run on individual phones with communication between them. If you want them to act like a cluster, as @Yaroslav pointed out, there is SETI@home and other projects running on the BOINC client.
TensorFlow could be combined with a game engine. You could have a procedurally generated, AI-driven augmented reality game that generates its story as multiple players interact with it. I have seen research papers for each of these components.
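To make the image-identification use case concrete, here is a minimal single-node sketch using a pretrained network via the tf.keras API (which postdates the question); a mobile distributed version would be about partitioning exactly this kind of work across phones. The input file name is a placeholder:

    # Single-node image identification with a pretrained network;
    # 'photo.jpg' is a placeholder input image.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

    preds = model.predict(x)
    # Top-3 human-readable labels with confidence scores
    print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])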

Bluetooth communication in NXJ

I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where is the data actually processed?
In other words, I want to know whether the data is processed on the PC or on the brick.
Second, what exactly are the roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched the web but I can't find this anywhere.
Please help me. Thanks.
You can see it in the package structure.
lejos.nxt.*
This package contains the classes that run on the NXT brick. All code in this package is compiled for the brick and runs on the brick.
lejos.pc.*
Here the difference is not as clear. This is Java code you compile for the PC, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) just send messages to the NXT brick, which then drives the motors.
lejos.pc.comm provides APIs that let you communicate with and control the NXT robot from the PC.
When you import the libs into an Android project, you can build an instance of the same environment used on a PC, but within Android.
I agree it can be tough finding some of this out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.NXJ.
Both provide APIs to do almost the same thing, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know it is the one that lets you manipulate the robot much more fully, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both the Android Bluetooth APIs and its own protocol to talk to the brick with LEGO LCP commands.
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update it in a couple of days. It seems not many people are interested, though, which is a shame.)
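To make the "PC composes commands, brick executes them" model concrete, here is a Python sketch outside the leJOS stack: it plays a tone on the brick by writing a raw LCP telegram over the Bluetooth serial link. It assumes the PyBluez package is installed and the brick is paired; the address is a placeholder:

    # Send a raw LCP (LEGO Communication Protocol) direct command from the
    # PC over Bluetooth RFCOMM. Assumes PyBluez is installed and the brick
    # is already paired; NXT_ADDR is a placeholder for your brick's address.
    import struct
    from bluetooth import BluetoothSocket, RFCOMM

    NXT_ADDR = "00:16:53:00:00:00"  # placeholder Bluetooth address

    # 0x80 = direct command without reply, 0x03 = PLAYTONE,
    # then frequency (Hz) and duration (ms) as little-endian u16s.
    telegram = struct.pack("<BBHH", 0x80, 0x03, 440, 500)

    sock = BluetoothSocket(RFCOMM)
    sock.connect((NXT_ADDR, 1))  # the NXT listens on RFCOMM channel 1
    # Over Bluetooth, every telegram is prefixed with its length (u16 LE).
    sock.send(struct.pack("<H", len(telegram)) + telegram)
    sock.close()

The division of labour is then plain: the PC only builds and sends the telegram, and everything from decoding the opcode to driving the speaker happens on the brick. The lejos.pc.comm classes wrap this same kind of exchange.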

Getting started - creating an iPhone app that controls another (non-iOS) device via Bluetooth commands

All,
Apologies in advance - this question might be too open-ended for SO.
Anyway... A friend of mine (an engineer and entrepreneur) is in the process of building a high-tech piece of lab equipment. He's asked me about the feasibility of building an iPhone/iPad/iPod application that would allow users to control the device via Bluetooth, so I'm helping him gather some information. I'm hoping to get a few pointers on how to get started. Specifically:
Would this require a native app, or could it be accomplished with HTML5 (with or without something like PhoneGap)?
Can you point me to a good primer on Bluetooth networking? Everything I've found assumes a VERY high level of pre-existing knowledge.
What are the basics of how something like this is accomplished? Is there a single, established protocol for how one device "controls" another, or is Bluetooth more like SSL: just a pipe that lets you convey any type of message?
I realize this question is incredibly broad, so I'm not really looking for specifics. But obvious Google searches don't turn up much, and I'm otherwise having a hard time finding a good starting point.
Thanks in advance.
You can communicate via Bluetooth in two ways. One is using the Bluetooth Low Energy capabilities of iOS 5 and newer iPhones/iPads:
https://developer.apple.com/library/ios/#documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/_index.html#//apple_ref/doc/uid/TP40011295
Unfortunately the documentation is sparse, and this route will require some hacking away. If you choose it, I would consider starting here and learning as much as you can about how the protocols work before digging into the framework:
http://developer.bluetooth.org/gatt/services/Pages/ServicesHome.aspx
The limitation of this route is that it might not be best for sending a lot of data. I have only built things that sent simple commands, which it works great for.
The other option is the External Accessory framework. This requires you to get an MFi license from Apple (not fun), and you will also need to pay royalties, but it will do what you want. You won't need to concern yourself much with the underlying protocols if you use it; the framework provides a friendly API for processing streams.
http://developer.apple.com/library/ios/#documentation/ExternalAccessory/Reference/ExternalAccessoryFrameworkReference/_index.html
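Whichever way you go on iOS, the BLE route boils down to the GATT model: a central connects to the peripheral and reads or writes characteristics, and a "command" is just bytes written to one of them. As a platform-neutral illustration, here is a Python sketch using the bleak library, which is handy for prototyping against the lab device from a laptop before writing any Core Bluetooth code; the address and characteristic UUID are placeholders for whatever the device exposes:

    # Sketch of the GATT interaction model: connect as a central, then
    # read/write a characteristic. Uses the cross-platform bleak library;
    # the device address and characteristic UUID below are placeholders.
    import asyncio
    from bleak import BleakClient

    DEVICE_ADDR = "AA:BB:CC:DD:EE:FF"
    COMMAND_CHAR = "0000ffe1-0000-1000-8000-00805f9b34fb"

    async def main():
        async with BleakClient(DEVICE_ADDR) as client:
            # A "control" command is just bytes written to a characteristic.
            await client.write_gatt_char(COMMAND_CHAR, b"\x01")
            status = await client.read_gatt_char(COMMAND_CHAR)
            print("device replied:", status)

    asyncio.run(main())

The iOS app would do the same dance with CBCentralManager and CBPeripheral; only the API surface differs, not the protocol.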
