How can I tell if a peripheral is connected to GPIO? - android-things

I want to be able to detect when a peripheral sensor is NOT connected to my Raspberry Pi 3.
For example, if I have a GPIO passive infrared sensor.
I can get all the GPIO ports like this:
PeripheralManagerService manager = new PeripheralManagerService();
List<String> portList = manager.getGpioList();
if (portList.isEmpty()) {
    Log.i(TAG, "No GPIO port available on this device.");
} else {
    Log.i(TAG, "List of available ports: " + portList);
}
Then I can connect to a port like this:
try {
    Gpio pir = new PeripheralManagerService().openGpio("BCM4");
} catch (IOException e) {
    // not thrown in the case of an empty pin
}
However, even if the pin is empty I can still connect to it (which technically makes sense, as GPIO is just binary, on or off). There doesn't seem to be any API for this, and I can't think of a way to logically differentiate between a pin that has a peripheral sensor connected and one that is "empty".
Therefore, at the moment there is no way for me to assert programmatically that my sensors and circuit are set up correctly.
Anyone have any ideas? Is it even possible from an electronics point of view?
Reference docs:
https://developer.android.com/things/sdk/pio/gpio.html

There are lots of ways to do "presence detection" electrically, but nothing that you will find intrinsically in the SoC. You wouldn’t normally ask a GPIO pin if something is attached—it would have no way to tell you that.
Extra GPIO pins are often used to detect if a peripheral is attached to a connector. The plug for some sensor could include a “detect” line that is shorted to ground and pulls the GPIO low when the sensor is attached, for example. USB and SDIO do something similar with some dedicated circuitry in the interface.
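If your hardware includes such a detect line, reading it from Android Things is straightforward. A minimal sketch, assuming a hypothetical detect line on "BCM17" that the sensor's plug shorts to ground (with an external pull-up holding it high otherwise):
try {
    Gpio detect = new PeripheralManagerService().openGpio("BCM17"); // hypothetical detect pin
    detect.setDirection(Gpio.DIRECTION_IN);
    detect.setActiveType(Gpio.ACTIVE_LOW);      // plug pulls the line low when the sensor is attached
    boolean sensorPresent = detect.getValue();  // true while the detect line is held low
    detect.close();
} catch (IOException e) {
    // treat I/O errors as "presence unknown"
}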
You could also build more elaborate detection circuits using things like current sensing, but they would inevitably have to put out a binary signal that you capture through a dedicated GPIO.
This is easier to achieve for serial peripherals, since you can usually send a basic command and verify that you get a response.
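With Android Things, for example, an I2C sensor can be probed by reading a known ID register and checking for a response. The bus name, address, and register below are placeholders for illustration, not values from your setup:
try {
    I2cDevice sensor = new PeripheralManagerService().openI2cDevice("I2C1", 0x68); // hypothetical bus/address
    byte id = sensor.readRegByte(0x75);        // hypothetical chip-ID / WHO_AM_I register
    boolean present = (id == (byte) 0x68);     // compare against the ID from the datasheet
    sensor.close();
} catch (IOException e) {
    // no ACK or read failure: treat the sensor as not connected
}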

Detection using solely the input line can be tough. First, you'd want to narrow the scope of the problem: treat the sensor as not present if it is not connected, if it is connected but not responding, or if it responds in an uncharacteristic manner.
So, if it is a digital sensor, then communicating with the sensor may be enough to tell if it is present or not (especially if checksums or parity bits are involved).
Some analog sensors also have specific specs on how they behave when triggered. You can use deviation from those specs to determine that the sensor is not present.
If you have a digital sensor without any error checking on its output, where you clock out data (so all 0s or all 1s is valid) or its output is just a binary 1 or 0, then you'd need external help. The same goes for most analog sensors.
This external help would be something where you put the system in a known controlled state, press a button, and it then checks the sensors for output within a specific range. To be absolutely sure, you'd want at least two different states, to ensure your digital or analog inputs didn't happen to be stuck at the correct state for your test.
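A minimal sketch of that check, with placeholder thresholds: take one reading in each known state and require both to fall inside ranges you characterised beforehand.
// Placeholder thresholds; the two readings are taken in two known physical states
// (e.g. before and after the operator triggers the sensor during the test).
static boolean selfTestPassed(double idleReading, double triggeredReading) {
    final double IDLE_MAX = 0.2;       // expected ceiling for the untriggered state
    final double TRIGGERED_MIN = 0.8;  // expected floor for the triggered state
    return idleReading < IDLE_MAX && triggeredReading > TRIGGERED_MIN;
}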
Just about any other method would be external to the system. Using additional IO to "detect" a sensor could help increase confidence the sensor is there, but you could get false positives where all you've learned is that "something" is there - not necessarily the sensor you expect.

Related

Updating the drake system states from robot hardware pose during initialization

I have been trying to set up a custom manipulation station with Kuka IIWA hardware in drake. I got the hardware interface working. When running a joint teleoperation code (adapted from drake/examples/manipulation_station/joint_teleop.py), the robot jerks violently (all joints try to move to the 0 position) at first and then continues to operate normally. On digging deeper, I found that this is caused by the FirstOrderLowPassFilter system. While advancing the simulation a tiny bit (simulator.AdvanceTo(1e-6)) to evaluate the LCM messages and set the initial GUI sliders, filter_initial_output_value, and plant joint positions to match the hardware, the FirstOrderLowPassFilter outputs a momentary value of 0. This sets the IIWA_COMMAND position to zero for an instant and causes a jerk.
How can I avoid this behavior?
As a workaround, I am subscribing separately to the raw LCM message from the hardware before initializing the drake systems, and setting filter_initial_output_value before advancing the simulation. Is this the recommended way?
I think what you're doing (manually reading the LCM message) is fine.
As an alternative, look at how DiscreteDerivative offers the suppress_initial_transient = true option. Perhaps we could add a similar option (via an unrestricted update event) to FirstOrderLowPassFilter so that the initial output value is sampled from the input at t == 0. But the event sequencing of startup may still be difficult. We essentially need to initialize the systems in their dataflow order, including refreshing output ports as events fire, which is not natively supported.
In another alternative, perhaps we could configure the IIWA_COMMAND publisher to not publish at t == 0, instead publishing only for t >= 0.005.
FirstOrderLowPassFilter has a method, set_initial_output_value(), to set the initial value: https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_first_order_low_pass_filter.html#aaef7539cfbf1acfa0cf487c371bc5360
It is used in the example that you copied from:
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/joint_teleop.py#L146

Writing BLE to Cycling Control Point - Adding Resistance

I have been working with BLE for a while now, but primarily for reading and notifying characteristics.
The devices are specifically virtual cycle trainers that support the GATT Cycling Power Service - 0x1818.
I know that it's possible to increase resistance on this trainer. I have read the documentation on the Cycling Power Control Point - 0x2A66 [link][2], which is the only characteristic with mandatory write functions, but none of the documentation seems to make sense.
Trainer: Cycleops Magnus
Reading and writing characteristic
// Reads all characteristics
var characteristics = service.characteristics;
for(BluetoothCharacteristic c in characteristics) {
List<int> value = await device.readCharacteristic(c);
print(value);
}
// Writes to a characteristic
await device.writeCharacteristic(c, [0x12, 0x34]);
Reading and writing descriptors
// Reads all descriptors
var descriptors = characteristic.descriptors;
for(BluetoothDescriptor d in descriptors) {
List<int> value = await device.readDescriptor(d);
print(value);
}
// Writes to a descriptor
await device.writeDescriptor(d, [0x12, 0x34]);
The closest I can see is setting the crank length or chain weight, but at this stage I am only guessing and am looking for some guidance.
The question is this: what characteristic or descriptor should I use to adjust virtual power trainer resistance, and what is the best way to do this?
Any coding language is fine; I can transpose it.
Screenshot of services available for device
[2]: https://www.bluetooth.com/specifications/gatt/viewer?attributeXmlFile=org.bluetooth.characteristic.cycling_power_control_point.xml
I think you're using the wrong Bluetooth service for this. The Cycling Power Service is for collecting data from cycling power meters like this one: https://www.cyclist.co.uk/reviews/6705/long-term-review-fsa-powerbox-carbon-power-cranks
For your requirements, I believe you should be using the Fitness Machine Service (0x1826) which includes the Indoor Bike Data characteristic (0x2AD2) and most importantly for you, the Fitness Machine Control Point characteristic. Take a look at section 4.16.1 of the Fitness Machine Service specification and you'll see details of operations which the control point supports, including a reference to 4.16.2.5 Set Target Resistance Level Procedure. I think this is what you need.
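As a rough sketch of what that write could look like with the stock Android BluetoothGatt API (rather than the Flutter library above): the op codes below are taken from the FTMS specification (0x00 = Request Control, 0x04 = Set Target Resistance Level), but verify them and the parameter encoding against the spec version your trainer implements; gatt is assumed to be an already-connected BluetoothGatt with control-point indications enabled.
UUID ftmsService = UUID.fromString("00001826-0000-1000-8000-00805f9b34fb"); // Fitness Machine Service
UUID ftmsControl = UUID.fromString("00002ad9-0000-1000-8000-00805f9b34fb"); // Fitness Machine Control Point
BluetoothGattCharacteristic ctrl =
        gatt.getService(ftmsService).getCharacteristic(ftmsControl);
ctrl.setValue(new byte[] { 0x00 });      // Request Control
gatt.writeCharacteristic(ctrl);
// ...wait for the success indication from the control point, then:
ctrl.setValue(new byte[] { 0x04, 20 });  // Set Target Resistance Level (check the spec for the parameter's resolution)
gatt.writeCharacteristic(ctrl);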
You cannot use the Cycling Power Control Point (CPP) for adding resistance. CPP can only be used to copy data like wheel revolutions from an old peripheral to a new one, or to reset data on the peripheral.
If you want to add resistance you need to check for the Fitness Machine service. I am using an Elite trainer, and Elite exposes the Fitness Machine Control Point (FTCP); you can write resistance and other settings like inclination and elevation using it.
Only a few vendors support Fitness Machine; others have published their API or source code, which you can use to add resistance and do other things like that.
Indoor trainers expose a few services:
1. Cycling Power Service (exists for both ANT+ and BT)
2. ANT+ FEC (ANT only)
3. BTLE Fitness Machine Service (FTMS)
4. Tacx ANT+ FEC over Bluetooth (https://blog.lazerwalker.com/2019/02/15/bike-game-part-2.html)
5. Wahoo's extension to the Cycling Power Service (to be able to set target power, for instance)
To add resistance to a trainer via #1, you need to check whether it also has the #5 service (this is the UUID used: A026E005-0A7D-4AB3-97FA-F1500F9FEB8B).
#4 is actually a protocol that Tacx came up with before FTMS was a standard, and some trainers still use it.

Beaglebone Black multiple HC-SR04 - Sensor (Ultrasonic)

I am currently trying to use more than one HC-SR04 on my BeagleBone Black (Rev C).
I tried the following script:
https://github.com/luigif/hcsr04 It works, but I have no idea how to change the pins that are used, or how to read the sensors one after another (serially).
Can someone help me, please?
Best regards,
Ingo
One possible solution with the current code is to add two sufficiently fast multiplexers on the echo/trigger pins of the sensors (8:1 or 16:1, depending on how many sensors you want to connect). The first mux switches between the trigger connections and the second between the echo connections. To control the muxes you'll have to connect their select lines to some of the GPIO pins (easiest are P8_14, P8_15, P8_16 and P8_18, since P8_11 and P8_12 are being used by the PRU).
You'll have to change the present code to something like this:
/* Execute code on PRU */
printf(">> Executing HCSR-04 code\n");
prussdrv_exec_program(0, "hcsr04.bin");
/*Add code here to set GPIO pins high/low to choose the sensor */
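/* One possible (untested) way to drive a mux select line from Linux userspace,
 * assuming the chosen pins are already exported and configured as outputs via sysfs;
 * e.g. P8_14 corresponds to /sys/class/gpio/gpio26 on a stock BeagleBone Black. */
FILE *sel = fopen("/sys/class/gpio/gpio26/value", "w");
if (sel != NULL) {
    fputs("1", sel);   /* drive the select line high to pick the next sensor's channel */
    fclose(sel);
}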
/* Get measurements */
Muxes generally have 5 V inputs and outputs; make sure that you step the output down to 3.3 V or you'll blow your BeagleBone!
Basic cheap muxes have a maximum response time of about 35 ns, which is more than sufficient for this application.
https://en.wikipedia.org/wiki/Multiplexer
http://socrates.berkeley.edu/~phylabs/bsc/PDFFiles/DM74151A.pdf
Addition: Tie all the trigger pins together and mux only the echo pins so that you'll need only one mux instead of 2

Contiki OS CC2538: Reducing current / power consumption

I am trying to drive down the current consumption of the contiki os running on the CC2538 development kit.
I would like to operate the device from a CR2032 with a run life of 2 years. To achieve this I would need an average current less than 100uA.
However when I run the following at 3V, I get the following results:
contiki/examples/hello-world = 0.4mA - 2mA
contiki/examples/er-rest-example/er-example-client = 27mA
contiki/examples/er-rest-example/er-example-server = 27mA
thingsquare websocket example = 4mA
I have also designed my own target platform based on the cc2538 and get similar results.
I have read the guide at https://github.com/contiki-os/contiki/blob/648d3576a081b84edd33da05a3a973e209835723/platform/cc2538dk/README.md
and have ensured that in the contiki-conf.h file:
- LPM_CONF_ENABLE 1
- LPM_CONF_MAX_PM 2
Can anyone give me some pointers as to how I can get the current down? It would be most appreciated.
Regards,
Shane
How did you measure the current?
You have to be aware that using a basic ampere meter to measure the current consumption of Contiki OS won't give you relevant results. The system turns the radio on and off at a relatively high rate (8 Hz by default) in order to perform CCA. This might not be very easy to catch with an ampere meter.
To get an idea of the current consumption when the device is in deep sleep (and then do the calculations to determine the averaged current consumption), I'd rather put the device into the PM state before the program reaches the infinite while loop. I used the following code to do that:
lpm_enter();
REG(SYS_CTRL_PMCTL) = SYS_CTRL_PMCTL_PM2;
do { asm("wfi"::); } while(0);
leds_on(LEDS_RED); // should not reach here
while(1){
...
On the CC2538, the CCA check consumes about 10-15 mA and lasts approximately 2 ms. When the radio transmits a packet, it consumes about 25 mA. Have a look at this post: Contiki UDP packet transmission duration with CC2538.
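A quick back-of-the-envelope calculation using those figures (treat them as rough assumptions to verify on your own board): 8 CCA checks per second at roughly 2 ms and ~12 mA each is a radio duty cycle of about 1.6%, i.e. approximately 0.016 × 12 mA ≈ 190 µA of average current from channel checks alone, before the deep-sleep floor or any transmissions are added. That is already above the ~100 µA average you need for two years on a CR2032, so the radio duty cycling, not just the MCU power mode, dominates your budget.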
Furthermore, to save a little more current, turn off the serial com:
#define CC2538_CONF_QUIET 1
Are you using the SmartRF board? If you want to make proper current measurements with this board, you have to remove all of the jumpers: P486, P487, P411 and P408. Keep only the jumpers for BTN_SEL and the RESET signals.

Why is DirectShow dragging in unnecessary intermediate filters when making multiple input connections to my DirectShow Transform filter?

I have a DirectShow Transform filter written in Delphi 6 using the DSPACK component library. It is a simple audio mixer that creates a new input pin whenever a new connection is attempted. I say simple because once its media format is set, all connections to its input pins or its singular output pin are forced to conform to that media format. I build the filter chain manually, making all pin connections explicitly myself. I do not use any of the "intelligent rendering" calls, unless there is some way to trigger that unwanted behavior (in my case) accidentally.
NOTE: The Capture Filter is a standard DirectShow filter external to my application. My push source audio filter and simple audio mixer filters are being used as private, unregistered filters and are internal to my application.
I am having a weird problem that only occurs when I try to make multiple input connections to my mixer, which does indeed accept them. Currently, I am attempting to connect both a Capture Filter and my custom Push Source audio filter to my mixer filter. Whenever I try to do that the second upstream filter connection fails. Regardless of whether I connect the Capture Filter first or Push Source audio filter first, the second upstream filter connection always fails.
The first test I ran was to try connecting just the Capture Filter to the mixer. That worked fine.
The second test I ran was to try connecting just the Push Source audio filter to the mixer. That worked fine.
But as soon as I try to do both I get a "no combination of intermediate filters could be found" error. I did several hours of deep digging into the media negotiation calls hitting my filter from the graph builder and then I found the problem. For some reason, the filter graph is dragging the ancient "Indeo (R) Audio Software" codec into the chain.
I discovered this because, despite the fact that the codec did have a media format that matched my filter in almost every regard (major type, sub type, format type, wave format parameters), it had an extra 2 bytes at the end of its pbFormat data member, and that was enough to fail the equals test, since that test compares the source and target pbFormat areas using the cbFormat value of each media type. The Indeo codec has a cbFormat value of 20 while my filter has a cbFormat value of 18, which is the size of a _tWAVEFORMATEX data structure. In a way it's a good thing the Indeo pbFormat has that odd size, because the first 18 bytes of its 20-byte area were exactly equal to the pbFormat area of my mixer filter's supported media type. Without that anomaly I never would have known that ancient codec was being dragged in. I'm surprised it's being dragged in at all, since it has known exploits and vulnerabilities. What is most confusing is that this is happening on my mixer filter's output pin, not one of the input pins, and I have not made a single downstream connection yet when building up my pin connections.
Can anyone tell me why DirectShow is trying to drag in that codec despite the fact that the media formats for both incoming filters, the Capture Filter and the Push Source filter, are identical and don't need any intermediate filters at all, since they match my mixer filter's input pins' supported format exactly? How can I fix this problem?
Also, I noticed that even in the single filter attachment tests above that succeeded, my mixer output pin was still getting queried for media formats. Why is that when as I said, at this point in building up my pin connections I have not connected anything to the output pin of my mixer filter?
--------------------------- UPDATE: 1 ----------------------------
I have learned that you can avoid the "intelligent connection" behavior entirely by using IFilterGraph.ConnectDirect() instead of IGraphBuilder.Connect(). I switched over to ConnectDirect() and it turns out that the input pin on my mixer filter is coming back as "already connected". That may be what is causing the graph builder to drag in the Indeo codec filter. Now that I have this new diagnostic information I will correct the problem and update this post with my results.
--------------------------- RESOLUTION ----------------------------
The root problem of all of this was my re-use of the input pin I obtained from the first destination/downstream filter I connected to my simple audio mixer filter, at the top of my application code. In other words my filter was working correctly, but I was not getting a fresh input pin with each upstream filter I tried to connect to it. Once I started doing that the connection process worked fine. I don't know why the code behind the IGraphBuilder.Connect() interface tried to bring in the Indeo codec filter, perhaps something to do with trying to connect to an already connected input pin, but it did. For my needs, I prefer the tight control that IFilterGraph.ConnectDirect() provides since it eliminates any interference from the intelligent connection code in IGraphBuilder, but I could see when video filters get involved it could become useful.
