How to get a complete description of the available commands for a Z-Wave device? - home-automation

I bought a Zipato Bulb 2. The datasheet and user manual are very minimal, with no comprehensive description of the available commands. I looked into OpenZWave's XML device description, but it is also incomplete.
So I asked Zipato directly, but the contact I reached is not on the technical side and doesn't know what I am talking about.
How am I supposed to interact with Z-Wave products if I don't know the commands they provide?

I'm not sure if you've seen this information, but here's a link to the user manual for the Zipato Bulb 2. It covers the command classes that the bulb uses. It should be the same as the OpenZWave XML file.
The devices don't usually come with much more documentation than that. Is there something you still don't understand after having read it?

You can find the list of command classes in the Z-Wave Alliance product database:
https://products.z-wavealliance.org/products/2712/classes
OpenZWave does not currently support the Firmware Update Md V2 command class; all of the other command classes reported there are supported.
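If you want to see the same list directly from your own controller, here is a rough sketch using python-openzwave. Property names such as command_classes_as_string are from my recollection of its ZWaveNode API, and the serial device path is just an example, so treat both as assumptions to check against the library documentation.

from openzwave.option import ZWaveOption
from openzwave.network import ZWaveNetwork

# Hypothetical controller path; adjust for your Z-Wave stick.
options = ZWaveOption("/dev/ttyUSB0")
options.set_console_output(False)
options.lock()

# Start the network and, once it is ready, list each node's command classes.
network = ZWaveNetwork(options, autostart=True)
# ... wait for network.state to reach the "ready" stage, then:
for node_id, node in network.nodes.items():
    print(node_id, node.product_name, sorted(node.command_classes_as_string))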

I have not found device manufacturers to be a useful source of information. The best thing to do is to query your controller through its API. The command will vary with different controllers, but with an ISY994i controller, as an example, I would use the following command:
192.168.X.YYY/rest/nodes
X.YYY is the local address of your controller on the network.
With my controller, this lists all devices and their sub-devices. For example:
AEON LABS SMART ENERGY SWITCH (DSC06106-ZWUS)
<node flag="128" nodeDefId="UZW0019">
<address>ZW004_1</address>
<name>ZW Main Computer</name>
<pnode>ZW004_1</pnode>
<cat>121</cat>
<property id="ST" value="0" formatted="Off" uom="78"/>
<node flag="0" nodeDefId="UZW002B">
<address>ZW004_143</address>
<name>ZW Main Computer Meter</name>
<pnode>ZW004_1</pnode>
<cat>143</cat>
<property id="ST" value="104991" formatted="104.991 Watts" uom="73" prec="3"/>
Again, it's going to be completely different for your controller, but you get the idea. Look to the controller's API, not the device manufacturer, for details on what variables you can set in Z-Wave devices. Also, some controllers will support functions that other controllers don't. Which Z-Wave chip is in the controller versus the device will also change what you see through the controller's API.
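As a rough sketch of the /rest/nodes query above (the address and credentials below are placeholders; the ISY994i's REST interface uses HTTP basic authentication):

import requests
import xml.etree.ElementTree as ET

# Placeholder address and credentials for the controller.
ISY = "http://192.168.X.YYY"
resp = requests.get(ISY + "/rest/nodes", auth=("admin", "password"))
resp.raise_for_status()

# Walk the returned XML and print each node's address, name and current state.
for node in ET.fromstring(resp.content).iter("node"):
    prop = node.find("property")
    state = prop.get("formatted") if prop is not None else "n/a"
    print(node.findtext("address"), node.findtext("name"), state)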

Related

Using XInput, is it possible to get an image of the controller?

In my simple mind, it seems useful to ship a nice image of your joystick with an index of the button and axis locations on the image. Can such a thing be queried through the XInput or DirectInput APIs? Would it be driver-specific, and if so which drivers support this?
In particular, I want to support Logitech wheels and XBOX 360 controllers. The Logitech Profiler seems to come with this information (or pull it from the driver). Is it accessible in my code as well?
I see the image of the joystick show up in the game controller properties, but I assume that entire property page is reported from the driver?
XInput does not supply this, and it is not supposed to: XInput is an interface for accessing the state and capabilities of the controller. Detecting which controller is attached is something you would do manually in your code by finding the correct device id, and I believe this is what the Logitech Profiler does as well (since it is a custom application for Logitech); it simply ships the images as resources in the program itself.
What is commonly done is to contact Microsoft, Sony, Nintendo or Logitech, who have high-quality images of their controllers that are approved for use in games. Specifically for images of the controller in console games, there are certain requirements you need to follow. The same requirements do not apply to games for Windows and other operating systems, but you should still be able to obtain these images from Microsoft and Logitech.
By retrieving the resources yourself you can also convert them to a format that suits the interface where you are presenting them.
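If it helps, here is a rough sketch of that detection step on Windows, calling XInputGetCapabilities through ctypes and mapping the reported subtype to an image file you ship yourself. The image file names and the subtype table are illustrative assumptions; the structure layouts follow the documented XInput headers as I remember them.

import ctypes
from ctypes import wintypes

class XINPUT_GAMEPAD(ctypes.Structure):
    _fields_ = [("wButtons", wintypes.WORD),
                ("bLeftTrigger", ctypes.c_ubyte), ("bRightTrigger", ctypes.c_ubyte),
                ("sThumbLX", ctypes.c_short), ("sThumbLY", ctypes.c_short),
                ("sThumbRX", ctypes.c_short), ("sThumbRY", ctypes.c_short)]

class XINPUT_VIBRATION(ctypes.Structure):
    _fields_ = [("wLeftMotorSpeed", wintypes.WORD), ("wRightMotorSpeed", wintypes.WORD)]

class XINPUT_CAPABILITIES(ctypes.Structure):
    _fields_ = [("Type", ctypes.c_ubyte), ("SubType", ctypes.c_ubyte),
                ("Flags", wintypes.WORD),
                ("Gamepad", XINPUT_GAMEPAD), ("Vibration", XINPUT_VIBRATION)]

# xinput1_4.dll ships with Windows 8+; older systems use xinput1_3/xinput9_1_0.
xinput = ctypes.windll.xinput1_4
caps = XINPUT_CAPABILITIES()
# User index 0 = first controller, dwFlags 1 = XINPUT_FLAG_GAMEPAD.
if xinput.XInputGetCapabilities(0, 1, ctypes.byref(caps)) == 0:  # 0 == ERROR_SUCCESS
    # Illustrative mapping from XINPUT_DEVSUBTYPE_* values to images you bundle.
    images = {0x01: "gamepad.png", 0x02: "wheel.png"}
    print(images.get(caps.SubType, "generic_controller.png"))
else:
    print("No controller connected at index 0")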
Since the Windows XInput API was created specifically for the XBox 360 controller, it's possible to make a great deal of assumptions about the controller layout. Even now, after the release of official drivers for the XBOne gamepad, the official API documents go out of their way to explicitly limit discussion to the 360 pads.

iOS Jailbroken devices development: How to dump method calls

I am pretty new to development for jailbroken iOS devices. From what I am reading, I understand that to do all the cool things you can't do on non-jailbroken phones, you have to hook into a given class and override some of its behaviour. Since there is no documentation, how does a developer work out which class exactly to hook?
I imagine that, for instance, if I wanted my app to respond to a given event such as phone boot, call hang-up, or the user tapping an icon, I would manually generate that event and see what invocations were made. Is this the proper way to find where you should hook your code, and if so, how is it done?
Note that I am not interested in exactly those events mentioned above; I am more interested in the approach in general.
There are several approaches:
Disassemble binaries
You can disassemble a binary or just dump its classes with something like class-dump.
That way you can see the whole hierarchy of classes.
Find dumped classes
Most major iOS subsystems have already been disassembled by somebody. You can find quite a lot of useful stuff.
As an example, a Google search for "SpringBoard headers" turns up existing dumps.
Dump classes at runtime
Look at this question for explanation: List selectors for Objective-C object
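For the class-dump route mentioned above, a minimal sketch might look like this (the binary path is a placeholder, and class-dump is assumed to be installed and on your PATH):

import subprocess

# Copy the target binary off the device first (e.g. with scp), then dump its
# Objective-C headers into a local directory for browsing.
binary = "SpringBoard"            # placeholder path to the copied binary
subprocess.run(["class-dump", "-H", "-o", "headers/", binary], check=True)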

AppStore / iOS apps and interpreted code - where do they draw the line?

Apple's iOS developer guidelines state:
3.3.2 — An Application may not itself install or launch other executable code by any means, including without limitation through the use of a plug-in architecture, calling other frameworks, other APIs or otherwise. No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s).
Assuming that downloading data - like XML and images, or a game level description, for example - at run-time is allowed (as is my impression?), I am wondering where they draw the line between "data" and "code". Picture the scenario of an app that delivers interactive "presentations" to users (like a survey, for instance). Presentations are added continuously to the server and different presentations are made available to different users, so they cannot be part of the initial app download (which would be the whole point). They are described in XML format, but being interactive, they might contain conditional branching of this sort (shown in pseudo form to exemplify):
<options id="Gender">
<option value="1">Male</option>
<option value="2">Female</option>
</options>
<branches id="Gender">
<branch value="1">
<image src="Man" />
</branch>
<branch value="2">
<image src="Woman" />
</branch>
</branches>
When this XML is interpreted and "played" within the app, the above would be presented in two steps. First a selection screen is shown, where the user can click on either of the two choices ("Male" or "Female"). Next, an image will be [downloaded dynamically] and displayed based on the choice made in the previous step.
Now, from this, it's easy to imagine additional tags, describing further logic still. For example, a containing tag could be added:
<loop count="3">
<options... />
<branches... />
</loop>
The result here being that the selection screen / image screen pair would be sequentially presented three times over, of course.
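To make the discussion concrete, here is a rough sketch of how such markup could be walked strictly declaratively (the element names come from my pseudo-XML above, not from any real schema):

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<presentation>
  <options id="Gender">
    <option value="1">Male</option>
    <option value="2">Female</option>
  </options>
  <branches id="Gender">
    <branch value="1"><image src="Man" /></branch>
    <branch value="2"><image src="Woman" /></branch>
  </branches>
</presentation>
""")

answers = {}
for element in doc:
    if element.tag == "options":
        # Present the pre-authored choices and record the selection.
        choices = {opt.get("value"): opt.text for opt in element}
        answers[element.get("id")] = input("Choose %s %s: " % (element.get("id"), choices))
    elif element.tag == "branches":
        # Select among pre-authored branches; nothing here executes downloaded logic.
        for branch in element:
            if branch.get("value") == answers.get(element.get("id")):
                print("show image:", branch.find("image").get("src"))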
Or imagine some format describing a level in a game. It is perhaps natural to view that as passive "data", but if it includes, say, several doorways that the user can go through and with various triggers, traps and points attached to them etc - isn't that the same as using a script (or, indeed, interpreted code) - to describe execution sequences, options and their conditional responses?
Assuming that the interpretation engine for the data is already present in the app and that such "presentations" can only be consumed (not created or edited) in the app, how would this fare against Apple's iOS guidelines? Doesn't XML basically constitute a scripting language in this sense (couldn't any program in an interpreted language be described in XML)?
Would it be OK if the proprietary scripting language (ref the XML used above) was strictly sandboxed (how can they tell?) and not given access to the operating system in any way (but able to download content - like a survey or a game level - dynamically as well as upload results - answers or scores - to the authoring server)?
Where does the line go?
Update as of WWDC 2017
Programming tools such as Codea mentioned below are now explicitly allowed to download code. The App Store Guidelines currently say (emphasis mine):
2.5.2 Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code, including other apps. Apps designed to teach, develop, or test executable code may, in limited circumstances, download code provided that such code is not used for other purposes. Such apps must make the source code provided by the Application completely viewable and editable by the user.
There is also this tweet citing more details on the relaxed clauses.
Original
Does your interpreted download allow the user to write infinite loops or recursion?
Apple allows JavaScript because they provide the interpreter and can kill your code. I have a feeling I've read that there's a 10-second limit, but I couldn't find it on the site with a few minutes of searching. (Yes, my self-imposed timeout for writing an answer kicked in.)
I think you're pretty safe if what you do is declarative and doesn't allow obvious looping in the interpreter.
I would also avoid the use of the word "interpreter" in any descriptions visible to Apple including public discussion. Maybe "parser" would be safer.
Codea have skated along the edge of these definitions with their Lua environment and cannot download code. They had to remove a feature for downloading new packages as ".codea" files.
Based on 3.3.2, they could reject an app for this. However, the scarier thing is that you could create the app, get it approved, have it be downloaded and used by many users, and then Apple could pull the app from the store.
Did you ever publish the app you described?
There's a major difference between the Guidelines and actual practice by the App Review team.
The current Guidelines state:
2.7 Apps that download code in any way or form will be rejected
2.8 Apps that install or launch other executable code will be rejected
So, the old ban on interpreted code is gone, and replaced by a ban on apps that could be considered to be IDEs or self-modifying.
However in practice there are a number of apps which do this, hence the difference between theoria and praxis.
You should take a look at what Apple has enabled in iOS 7: you are now allowed to download and run JavaScript within your app.
I think what Apple means is that your application should not depend, in order to work, on another module, compiled product, or executable that is downloaded from a website or server and that was not reviewed by Apple.
Basically, when I asked something similar, they told me something like: your application may not download other executable, compiled code, such as an FTP downloader, a key-decryption tool, or anything of that kind that was not approved by Apple. You are allowed to download data or files (such as XML, HTML, PDF files, or images) that do not constitute an application.
The concept of the differences between 'code' and 'data' has been discussed on SO before.
Please see this answer: https://stackoverflow.com/a/642476/200696
From Apple's perspective, this ban prevents un-reviewed executable content from the app store. It would be trivial to create a program which is approved by Apple, and then downloads executable content that changes the pre-approved behavior.
All I can tell you is I've released products which use XML to script behavior within the app and Apple has always approved them.

Emulate GPS or a serial device

Is it possible to get location data out of Google Gears, the Google Geolocation API, or any other web location API (such as Fire Eagle) in such a format that it appears to other software as a GPS device?
It occurred to me, reading these answers to my question regarding WiFi location finding on Super User, that if I could emulate a GPS unit, many of these web services could act as a 'poor man's' GPS for otherwise less useful software that requires one.
Is GPSD an option?
Preferably OSX & Python, but I would be interested in any implementation.
There is a very similar thread on a Python mailinglist that mentions Windows virtual COM ports and discusses Unix's pseudo-tty capabilities. If the app(s) you want to use let you type in a specific tty device file, this may be the easiest route. (Short of asking the authors to provide a plugin API for what you're trying to do, or buying yourself a $20 bluetooth GPS mouse.)
Are you using OS X?
There is a project macosxvirtualserialport on Google code that provides a graphical wrapper around some of the features of a utility called socat. I'd recommend taking a look at socat if you see potential in the pseudo-tty route. I believe you could use socat to link a pipe from a Python program to a pseudo-tty.
Most native Mac apps will be querying IOServiceMatching for a device with kIOSerialBSDRS232Type, and I doubt that a pseudo-tty will show up as an IOKit service.
In this case, unless you can find a project that has already implemented such a thing, you will need to implement a driver as described in this How to create virtual COM port thread. If you're going to the trouble of creating a device driver, you would want to base it on IOKit because of that likely IOServiceMatching query. You can find the Apple16X50Serial project mentioned in that post at the top of Apple's open source code list (go to the main page and pick an older OS release if you want to target something pre-10.6).
If your app is most useful with realtime data (e.g. the RouteBuddy app mentioned in the Python mailinglist thread can log current positions) then you will want to fetch updates from your web sources (hopefully they support long-polling) and convert them to basic NMEA RMC sentences. You do not want to do this from inside your driver code. Instead, divide your work up into kernel-land and user-land pieces that can communicate, and put as little of the code as possible into the kernel part.
If you want to let apps both read and write to these web services, your best bet would probably be to simulate a Garmin device. Garmin has more-or-less documented their protocol in the IntfSpec.pdf file included with their Device Interface SDK. Again, you'd want to split as much as you could into user-space code.
I was unable to find a project or utility that implements the kernel side of an IOKit-based virtual serial interface, but I'd be surprised if there wasn't one hiding somewhere out there. Unfortunately, most of the answers I found to that question were like this, with the developer being told to get busy writing a kext.
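For the simpler pseudo-tty route above, a minimal Python sketch (assuming the consuming app lets you point it at an arbitrary tty device file) could look like this:

import os, pty, time

# Create a pseudo-tty pair; point the GPS-consuming app at the slave side.
master, slave = pty.openpty()
print("Point your GPS application at:", os.ttyname(slave))

# Canned $GPRMC sentence; in practice you would build these from the fixes
# returned by whichever web location API you are polling.
sentence = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A\r\n"
while True:
    os.write(master, sentence.encode("ascii"))
    time.sleep(1)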
I'm not exactly sure how to accomplish what you're asking, but I may be able to lend some insight as to how you might begin to get it done. So here goes:
A GPS device shows up to most systems as nothing more than a serial device -- a.k.a. a COM port if you're dealing with Windows, /dev/ttySx if you're in *nix. By definition, a serial port's specific duty is to stream data across a bus, one block at a time. So, it would then follow logically that if you want to emulate the presence of a GPS device, you should gather the data you're consuming and put it into a stream that somehow acts like an active serial port.
There are, however, some complications you might want to consider:
Most GPS devices don't just send out location data; there's also information on satellite locations, fix quality, bearing, and so on. Then again, nobody's made any rules saying you have to make all that data available. There's probably more to this, but I'll admit that I need to do more research in this area myself.
I'm not sure how fast you can receive data when dealing with Google Latitude, etc., but any delays in receiving would definitely result in visible pauses in your "serial port"'s data stream. Again, this may not be as big a complication as it seems, because GPS devices are known to "burst" data across the bus anyway, but I'd definitely keep an eye on that. You want to make sure there's always a surplus of data coming across, not a shortage.
Along the way you'll also have to transform the coordinates you receive into valid GPS sentences, as well. You can find specifications for those, but I would definitely make friends with the NMEA standard -- even though it is a flawed standard, it's the one everyone seems to agree on anyway.
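As a rough sketch of that conversion step, here is one way to turn decimal-degree coordinates into a simplified $GPRMC sentence with the standard XOR checksum (speed, course and magnetic variation are zeroed or left blank as simplifying assumptions):

import time

def nmea_rmc(lat, lon):
    # Convert decimal degrees to NMEA's ddmm.mmmm / dddmm.mmmm form.
    def to_dm(value, positive, negative):
        hemi = positive if value >= 0 else negative
        value = abs(value)
        return int(value), (value - int(value)) * 60, hemi

    lat_d, lat_m, lat_h = to_dm(lat, "N", "S")
    lon_d, lon_m, lon_h = to_dm(lon, "E", "W")
    now = time.gmtime()
    body = ("GPRMC,%02d%02d%02d,A,%02d%07.4f,%s,%03d%07.4f,%s,0.0,0.0,%02d%02d%02d,,"
            % (now.tm_hour, now.tm_min, now.tm_sec,
               lat_d, lat_m, lat_h, lon_d, lon_m, lon_h,
               now.tm_mday, now.tm_mon, now.tm_year % 100))
    checksum = 0
    for ch in body:           # NMEA checksum: XOR of every byte between $ and *
        checksum ^= ord(ch)
    return "$%s*%02X\r\n" % (body, checksum)

print(nmea_rmc(48.1173, 11.5167))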
Hope this helped you, at least a little bit. Are there any more details specific to your problem that you think could be useful in answering this question?
Take a look at Franson GPS Gate, which allows you to connect to Google Earth among other things (like simulating GPS and so on). It is Windows-only, but I think you could get some useful ideas from it.
I haven't looked into it very much, but have you considered using Skyhook's SDK? It might provide you with some of what you are looking for. It's available for every major desktop and mobile OS.

Carmen Robotics

I have been working with Carmen http://carmen.sourceforge.net/ for a while now, and I really like the software but I need to make some changes inside the source code.
I am therefore interested in student reports/projects that have worked with Carmen, or in any documentation of the source code.
I have been reading the documentation on the webpage for Carmen, but with all respect I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen for robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
Built-in visualizer/simulator, rviz.
- It has a record function which will record all of the messages passed out of nodes; this lets you take in a lot of raw data, store it in a "ros bag", and then play it back later when you need to test your AI but want to sit in your bed.
Built-in navigation capabilities.
- All you have to do is write the publishers of data for your sensors (see the sketch after this list).
- It has standard messages that you need to fill out so that the stack has enough information.
There is an Extended Kalman Filter, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
It also has built-in message levels; by that I mean you can change which severity of print messages is printed at runtime, which is fairly handy for debugging.
There's a robot monitor node that you can publish the status of your sensors to and it bundles all of that information into a GUI for your viewing pleasure.
There are some basic drivers already written. For example, SICK lidars are supported right out of the box.
There is also a built-in transform function, to help you move everything to the right coordinate system.
ROS was made to run across multiple computers, but can work on just one.
Data transfer is handled over TCP ports.
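As promised above, here is a minimal sketch of the "write a publisher for your sensor" step using rospy; the topic name, frame id and dummy ranges are assumptions for illustration, not anything ROS requires.

import rospy
from sensor_msgs.msg import LaserScan

# Publish LaserScan messages that the navigation stack can consume.
rospy.init_node("my_lidar_driver")
pub = rospy.Publisher("scan", LaserScan, queue_size=10)
rate = rospy.Rate(10)                      # 10 Hz

while not rospy.is_shutdown():
    msg = LaserScan()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "laser"          # assumed sensor frame
    msg.angle_min, msg.angle_max = -1.57, 1.57
    msg.angle_increment = 0.01
    msg.range_min, msg.range_max = 0.1, 10.0
    msg.ranges = [1.0] * 314               # dummy data; fill from your sensor here
    pub.publish(msg)
    rate.sleep()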
I hope that's more helpful.
