I am currently trying to use more than one HC-SR04 on my BeagleBone Black (Rev. C).
I tried the following script:
https://github.com/luigif/hcsr04 It works, but I have no idea how to change the pins it uses, nor how to read several sensors one after the other.
Can someone help me, please?
Best regards,
Ingo
One possible solution with the current code is to add two sufficiently fast multiplexers to the echo/trigger pins of the sensors (8:1 or 16:1, depending on how many sensors you want to connect). The first mux switches between the trigger connections and the second between the echo connections. To control the muxes, you'll have to connect their select lines to some of the GPIO pins (easiest are P8_14, P8_15, P8_16 and P8_18, since P8_11 and P8_12 are being used by the PRU).
You'll have to change the existing code to something like this:
/* Execute code on PRU */
printf(">> Executing HCSR-04 code\n");
prussdrv_exec_program(0, "hcsr04.bin");
/* Add code here to set GPIO pins high/low to choose the sensor */
/* Get measurements */
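For the "choose the sensor" step, here is a minimal sketch using the sysfs GPIO interface. It assumes the mux select lines are wired to P8_14, P8_15 and P8_16, which correspond to Linux GPIOs 26, 47 and 46 on the BBB; verify this against your own pin mux, and export the pins first (e.g. echo 26 > /sys/class/gpio/export):

/* Sketch only: drive the mux select lines through sysfs GPIO. */
#include <stdio.h>

static const int sel_gpio[3] = { 26, 47, 46 };  /* P8_14, P8_15, P8_16 (verify!) */

static int set_gpio(int gpio, int value)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", gpio);
    f = fopen(path, "w");
    if (!f)
        return -1;                 /* pin not exported? */
    fprintf(f, "%d", value);
    fclose(f);
    return 0;
}

/* Select one of up to eight sensors on an 8:1 mux. */
static void select_sensor(int channel)
{
    int bit;
    for (bit = 0; bit < 3; bit++)
        set_gpio(sel_gpio[bit], (channel >> bit) & 1);
}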
Muxes generally have 5 V inputs and outputs; make sure you step the levels down to 3.3 V, or else you'll blow your BeagleBone!
A basic, cheap mux has a maximum response time of around 35 ns, which is more than sufficient to match the requirements.
https://en.wikipedia.org/wiki/Multiplexer
http://socrates.berkeley.edu/~phylabs/bsc/PDFFiles/DM74151A.pdf
Addition: tie all the trigger pins together and mux only the echo pins, so that you need only one mux instead of two.
I want to be able to detect when a peripheral sensor is NOT connected to my Raspberry Pi 3.
For example, if I have a GPIO passive infrared sensor.
I can get all the GPIO ports like this:
PeripheralManagerService manager = new PeripheralManagerService();
List<String> portList = manager.getGpioList();
if (portList.isEmpty()) {
    Log.i(TAG, "No GPIO port available on this device.");
} else {
    Log.i(TAG, "List of available ports: " + portList);
}
Then I can connect to a port like this:
try {
    Gpio pir = new PeripheralManagerService().openGpio("BCM4");
} catch (IOException e) {
    // not thrown in the case of an empty pin
}
However, even if the pin is empty I can still connect to it (which technically makes sense, as GPIO is just binary, on or off). There doesn't seem to be any API for this, and I can't see, logically, how you could differentiate between a pin that has a peripheral sensor connected and one that is "empty".
Therefore, at the moment there is no way for me to assert programmatically that my sensors and circuit are set up correctly.
Does anyone have any ideas? Is it even possible from an electronics point of view?
Reference docs:
https://developer.android.com/things/sdk/pio/gpio.html
There are lots of ways to do "presence detection" electrically, but nothing that you will find intrinsically in the SoC. You wouldn’t normally ask a GPIO pin if something is attached—it would have no way to tell you that.
Extra GPIO pins are often used to detect if a peripheral is attached to a connector. The plug for some sensor could include a “detect” line that is shorted to ground and pulls the GPIO low when the sensor is attached, for example. USB and SDIO do something similar with some dedicated circuitry in the interface.
You could also build more elaborate detection circuits using things like current sensing, but they would inevitably have to put out a binary signal that you capture through a dedicated GPIO.
This is easier to achieve for serial peripherals, since you can usually send a basic command and verify that you get a response.
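As an illustration of that serial-peripheral check, here is a minimal C sketch for an I2C sensor on plain Linux (Android Things exposes the equivalent operations through its I2cDevice API). The bus path, device address 0x68 and ID register 0x75 are placeholder values, not something from the question; take the real ones from your sensor's datasheet:

/* Sketch only: probe for an I2C sensor by reading a known ID register. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int sensor_present(const char *bus, int addr,
                   unsigned char id_reg, unsigned char expected_id)
{
    unsigned char id;
    int fd = open(bus, O_RDWR);              /* e.g. "/dev/i2c-1" */

    if (fd < 0)
        return 0;
    if (ioctl(fd, I2C_SLAVE, addr) < 0 ||    /* claim the address */
        write(fd, &id_reg, 1) != 1 ||        /* select the ID register */
        read(fd, &id, 1) != 1) {             /* no ACK -> nothing wired there */
        close(fd);
        return 0;
    }
    close(fd);
    return id == expected_id;                /* wrong ID -> not the sensor we expect */
}

If nothing ACKs the address, the transfer fails and the sensor is reported absent; checking the ID also catches the case where "something" is wired there but it isn't the sensor you expect.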
Detection using solely the input line can be tough. First, you'd want to narrow the scope of the problem: treat as "not present" a sensor that is not connected, a sensor that is connected but not responding, and a sensor that responds in an uncharacteristic manner.
So, if it is a digital sensor, then communicating with the sensor may be enough to tell if it is present or not (especially if checksums or parity bits are involved).
Some analog sensors also have specific specs on how they behave when triggered. You can use deviation from those specs to determine that the sensor is not present.
If you have a digital sensor without any error checking on its output, where you clock out data (so all 0s or all 1s is valid) or the output is just a binary 1 or 0, then you'd need external help. The same goes for most analog sensors.
This external help would be something where you put the system into a known, controlled state, press a button, and it then checks that the sensors' outputs are within a specific range. To be absolutely sure, you'd want at least two different states, to ensure your digital or analog inputs didn't happen to be stuck at the correct level for your test.
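A sketch of that two-state check; set_test_state() and read_sensor() are hypothetical helpers standing in for whatever actuator and ADC access your system provides:

/* Sketch only: verify the sensor responds plausibly in several forced states. */
struct test_point {
    int state;          /* state to force the system into */
    int min, max;       /* expected sensor reading in that state */
};

int self_test(const struct test_point *points, int n,
              void (*set_test_state)(int), int (*read_sensor)(void))
{
    int i, v;
    for (i = 0; i < n; i++) {
        set_test_state(points[i].state);   /* force a known condition */
        v = read_sensor();
        if (v < points[i].min || v > points[i].max)
            return 0;                      /* out of range -> fail */
    }
    return 1;   /* plausible in every state -> sensor very likely present */
}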
Just about any other method would be external to the system. Using additional IO to "detect" a sensor could help increase confidence the sensor is there, but you could get false positives where all you've learned is that "something" is there - not necessarily the sensor you expect.
I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is a FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in the frequency domain so that mark and space end up at +/-500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (the middle between 2200 and 1200 Hz) ends up at 0 Hz, and is then filtered by a low pass (gain = 1.0, stopband starts at 1 kHz, passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth still present in the signal is much lower than the sample rate, so you could also just downsample without losses (set the decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
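For reference, the arithmetic behind the shift and the later sign decision (the demod relation is the standard one for GNU Radio's Quadrature Demod block; the concrete gain value is my suggestion, not something from the original flow graph):

f_center = (1200 Hz + 2200 Hz) / 2 = 1700 Hz
mark:  1200 Hz - 1700 Hz = -500 Hz
space: 2200 Hz - 1700 Hz = +500 Hz

Quadrature Demod: y[n] = gain * arg(x[n] * conj(x[n-1]))

With gain = samp_rate / (2 * pi * 500), a tone at -500 Hz comes out as -1 and one at +500 Hz as +1, so thresholding the demod output at zero recovers the bits.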
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just going and trying the polyphase clock recovery instead of Mueller & Müller; choosing almost any "somewhat round" pulse shape could work.
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:
I am trying to drive down the current consumption of Contiki OS running on the CC2538 development kit.
I would like to operate the device from a CR2032 with a run life of 2 years. To achieve this, I would need an average current of less than 100 uA.
However, when I run the following examples at 3 V, I get these results:
contiki/examples/hello-world = 0.4mA - 2mA
contiki/examples/er-rest-example/er-example-client = 27mA
contiki/examples/er-rest-example/er-example-server = 27mA
thingsquare websocket example = 4mA
I have also designed my own target platform based on the CC2538 and get similar results.
I have read the guide at https://github.com/contiki-os/contiki/blob/648d3576a081b84edd33da05a3a973e209835723/platform/cc2538dk/README.md
and have ensured that in the contiki-conf.h file:
- LPM_CONF_ENABLE 1
- LPM_CONF_MAX_PM 2
Can anyone give me some pointers as to how I can get the current down? It would be most appreciated.
Regards,
Shane
How did you measure the current?
You have to be aware that using a basic ammeter to measure the current consumption of Contiki OS won't give you relevant results. The system turns the radio on and off at a relatively high rate (8 Hz by default) in order to perform CCA (clear channel assessment). That is not easy for an ammeter to catch.
To get an idea of the current consumption when the device is in deep sleep (and then do the math to determine the average current consumption), I'd rather put the device into the PM state before the program reaches the infinite while loop. I used the following code to do that:
lpm_enter();
REG(SYS_CTRL_PMCTL) = SYS_CTRL_PMCTL_PM2;  /* request power mode PM2 */
do { asm("wfi"::); } while(0);             /* wait for interrupt: drop into deep sleep */
leds_on(LEDS_RED);                         /* should not reach here */
while(1){
...
On the CC2538, the CCA check consumes about 10-15 mA and lasts approximately 2 ms. When the radio transmits a packet, it consumes 25 mA. Have a look at this post: Contiki UDP packet transmission duration with CC2538.
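A quick back-of-the-envelope average from those figures (taking 12.5 mA as the midpoint of the 10-15 mA range):

duty cycle: 8 Hz * 2 ms = 1.6 %
average:    0.016 * 12.5 mA = 200 uA from CCA alone

So besides enabling PM2, you will likely also have to lower the channel check rate (NETSTACK_CONF_RDC_CHANNEL_CHECK_RATE in Contiki) to get under a 100 uA budget.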
Furthermore, to save a little more current, turn off the serial communication:
#define CC2538_CONF_QUIET 1
Are you using the SmartRF board? If you want to make proper current measurements with this board, you have to remove all of the jumpers: P486, P487, P411 and P408. Keep only the jumpers for BTN_SEL and the RESET signals.
Looking for a little help.
I'm familiar with PIC Microcontrollers but have never used Atmel.
I'm required to use an ATMEGA128 for a project at work so I've been playing around in Atmel Studio 6 the last few days.
I'm having an issue, however: I can't even get an LED to blink.
I'm using the STK500 and STK501 Dev boards and the JTAGICE_MKII USB debugger/programmer.
The ATMEGA128 chip is a TQFP package that's in the socket on the STK501 board.
I'm able to program/read the chip no problem, and my code builds without error (except when I try to use the delay functions from the delay.h library - but that's another issue).
For now I'm just concerned with getting the IO working. I have a jumper from 2 bits of PORTD connecting to 2 of the LEDs on the STK500 board.
All I'm doing in my code is setting the port direction with the DDRx registers and then setting all the PORTD pins to 0. The LEDs remain turned on.
When I'm in debug mode with the watch window open, I can break the code, and the watch window shows me that the PORTD bits are indeed all 0s, but the LEDs remain on.
So far, I hate Atmel. :)
Any ideas?
Thanks
Have you tried setting them to logic 1? It is common for LED circuits to connect the LED to Vcc via a current-limiting resistor, which means the output port has to be 0 to turn on the LED.
If you set it to 1 and the LED goes off, then that'll tell you it's an "active low" signal and you can reverse your logic accordingly.
Have you read the STK500's documentation? It is likely that the LEDs are driven active-low.
There are two steps to follow. First, you set the "direction" of the pins, because they can be used as input or output. To make the PORTD pins output pins:
DDRD = 0xFF;
This will set all pins on the D register as output pins. Do this first. Then code like:
PORTD |= 0x01;
will set the D0 pin high. And code like
PORTD ^= 0x01;
will toggle the pin.
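Putting it all together, a minimal blink sketch for the STK500's active-low LEDs could look like the following. The F_CPU value assumes the ATmega128's factory-default 1 MHz internal RC oscillator, so adjust it to your actual clock; note that a missing F_CPU definition and disabled compiler optimization are the two usual causes of the delay.h trouble mentioned in the question:

/* Minimal blink sketch, assuming active-low LEDs wired to PORTD (STK500). */
#define F_CPU 1000000UL   /* factory-default 1 MHz internal RC; adjust to your clock */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRD  = 0xFF;         /* all PORTD pins as outputs */
    PORTD = 0xFF;         /* drive high first: LEDs off (active low) */

    for (;;) {
        PORTD ^= 0x01;    /* toggle PD0: LED blinks */
        _delay_ms(500);
    }
}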
See this tutorial for a little more info, or visit this community. The Atmel community is vibrant and helpful.
I have a DirectShow Transform filter written in Delphi 6 using the DSPACK component library. It is a simple audio mixer that creates a new input pin whenever a new connection is attempted. I say simple because, once its media format is set, all connections to its input pins or its singular output pin are forced to conform to that media format. I build the filter chain manually, making all pin connections explicitly myself. I do not use any of the "intelligent rendering" calls, unless there is some way to trigger that unwanted behavior (in my case) accidentally.
NOTE: The Capture Filter is a standard DirectShow filter external to my application. My push source audio filter and simple audio mixer filters are being used as private, unregistered filters and are internal to my application.
I am having a weird problem that only occurs when I try to make multiple input connections to my mixer, which does indeed accept them. Currently, I am attempting to connect both a Capture Filter and my custom Push Source audio filter to my mixer filter. Whenever I try to do that the second upstream filter connection fails. Regardless of whether I connect the Capture Filter first or Push Source audio filter first, the second upstream filter connection always fails.
The first test I ran was to try connecting just the Capture Filter to the mixer. That worked fine.
The second test I ran was to try connecting just the Push Source audio filter to the mixer. That worked fine.
But as soon as I try to do both, I get a "no combination of intermediate filters could be found" error. I did several hours of deep digging into the media negotiation calls hitting my filter from the graph builder, and then I found the problem. For some reason, the filter graph is dragging the ancient "Indeo (R) Audio Software" codec into the chain.
I discovered this because, despite the fact that the codec had a media format that matched my filter's in almost every regard (major type, sub type, format type, wave format parameters), it had an extra 2 bytes at the end of its pbFormat data member, and that was enough to fail the equality test, since that test compares the source and target pbFormat areas using the cbFormat value of each media type. The Indeo codec has a cbFormat value of 20, while my filter has a cbFormat value of 18, which is the size of a _tWAVEFORMATEX data structure.
In a way it's a good thing the Indeo pbFormat has that odd size, because the first 18 bytes of its 20-byte area were exactly equal to the pbFormat area of my mixer filter's supported media type. Without that anomaly I never would have known that ancient codec was being dragged in. I'm surprised it's being pulled in at all, since it has known exploits and vulnerabilities. What is most confusing is that this is happening on my mixer filter's output pin, not one of the input pins, and I had not made a single downstream connection yet when building up my pin connections.
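To make the failing comparison concrete, here is a simplified C sketch of the check described above. The field names mirror DirectShow's AM_MEDIA_TYPE, but this is an illustration of the logic, not the actual DirectShow implementation:

/* Simplified sketch: why a 20-byte format block never matches an 18-byte one.
 * sizeof(WAVEFORMATEX) is 18, so a codec advertising cbFormat == 20 fails
 * the size check even when its first 18 bytes are identical. */
#include <string.h>

struct media_type {
    unsigned long  cbFormat;   /* size of the format block, in bytes */
    unsigned char *pbFormat;   /* the format block itself */
    /* major type, sub type and format type GUIDs omitted here */
};

int format_blocks_equal(const struct media_type *a, const struct media_type *b)
{
    if (a->cbFormat != b->cbFormat)   /* 18 != 20 -> immediate mismatch */
        return 0;
    return memcmp(a->pbFormat, b->pbFormat, a->cbFormat) == 0;
}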
Can anyone tell me why DirectShow is trying to drag in that codec, despite the fact that the media formats of both incoming filters, the Capture Filter and the Push Source filter, are identical and don't need any intermediate filters at all, since they match my mixer filter's input pins' supported format exactly? How can I fix this problem?
Also, I noticed that even in the single-filter attachment tests above that succeeded, my mixer's output pin was still getting queried for media formats. Why is that when, as I said, at this point in building up my pin connections I have not connected anything to the output pin of my mixer filter?
--------------------------- UPDATE: 1 ----------------------------
I have learned that you can avoid the "intelligent connection" behavior entirely by using IFilterGraph.ConnectDirect() instead of IGraphBuilder.Connect(). I switched over to ConnectDirect(), and it turns out that the input pin on my mixer filter is coming back as "already connected". That may be what is causing the graph builder to drag in the Indeo codec filter. Now that I have this new diagnostic information, I will correct the problem and update this post with my results.
--------------------------- RESOLUTION ----------------------------
The root problem in all of this was my reuse of the input pin I obtained from the first destination/downstream filter I connected to my simple audio mixer filter, at the top of my application code. In other words, my filter was working correctly, but I was not getting a fresh input pin for each upstream filter I tried to connect to it. Once I started doing that, the connection process worked fine. I don't know why the code behind the IGraphBuilder.Connect() interface tried to bring in the Indeo codec filter, perhaps something to do with trying to connect to an already-connected input pin, but it did. For my needs, I prefer the tight control that IFilterGraph.ConnectDirect() provides, since it eliminates any interference from the intelligent connection code in IGraphBuilder, but I can see that when video filters get involved it could become useful.