Is it possible to implement an IR receiver on Android Things?
1st idea:
Use a GPIO pin as input, buffer the level changes, and then parse the buffer to decode a message.
findings:
The GPIO listener mechanism is too slow to observe the IR signal.
Another way is to read the GPIO in an infinite loop, but all IR protocols depend heavily on precise timing, and Java (Dalvik) is not accurate enough for that.
2nd idea:
Use UART
findings:
It seems possible to adjust the baud rate so that all bits of a message can be observed, but the UART API requires configuring the number of start bits etc., and this is a problem because IR protocols do not fit that framing.
IMHO, UART is the only path at the moment, but it would be a huge workaround.
The overarching problem (as you've discovered) is that any non-realtime system will have difficulty parsing this input directly because of the timing constraints. This is a job best suited to a microcontroller where you can access a timer interrupt. Grab an inexpensive tinyAVR or PIC to manage the sensor for you.
You will also want to use a dedicated receiver sensor (you might already be doing this) to simplify parsing the signal. These sensors include a demodulator, which means you don't have to deal with the 38 kHz carrier directly; the input is converted into a more standard PWM-like waveform.
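To give a sense of what the microcontroller then has to do, here is a minimal sketch of decoding one captured frame, assuming the NEC protocol (9 ms leader mark, 4.5 ms space, then 32 bits where the space length encodes the bit value); the timings and the 25% tolerance are illustrative and would need tuning for your sensor:

    #include <cstdint>
    #include <vector>

    // Decode one 32-bit NEC frame from alternating mark/space durations in
    // microseconds, as captured from a demodulating receiver (e.g. TSOP38xx).
    // A space of ~562 us encodes 0, ~1687 us encodes 1.
    static bool near_to(uint32_t t, uint32_t nominal) {
        uint32_t diff = (t > nominal) ? t - nominal : nominal - t;
        return diff < nominal / 4;   // 25% tolerance, tune for your sensor
    }

    bool decode_nec(const std::vector<uint32_t>& d, uint32_t& value) {
        if (d.size() < 66u) return false;   // leader pair + 32 mark/space pairs
        if (!near_to(d[0], 9000) || !near_to(d[1], 4500)) return false;
        value = 0;
        for (int i = 0; i < 32; i++) {
            uint32_t mark  = d[2 + 2 * i];
            uint32_t space = d[3 + 2 * i];
            if (!near_to(mark, 562)) return false;            // constant mark
            if      (near_to(space, 1687)) value = (value << 1) | 1;
            else if (near_to(space, 562))  value = (value << 1);
            else return false;
        }
        return true;
    }

Other protocols (RC-5, Sony SIRC, ...) differ in framing but follow the same measure-and-compare pattern.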
I believe you can't process the IR signal in Java because the pulses are shorter than the achievable read resolution, at least on a Raspberry Pi. For faster GPIO reads I'm confident you can do it in C++ with the NDK on the Raspberry Pi. Though it's not officially supported, there are some tricks to enable it; see How to do GPIO on Android Things bypassing Java for how to write to GPIO in C. From there it should be trivial to read in a tight loop. I would still try to hook the trigger from Java, though, since so far I have no clear, easy idea of how to write/install interrupts in C.
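For illustration, a tight loop in native code could look roughly like this. This is untested: the base address and register offset are the usual values for a Raspberry Pi 2/3 (BCM2836/7) and must be verified against the datasheet, and mapping /dev/mem requires a rooted device:

    #include <cstdint>
    #include <ctime>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Assumed for a Raspberry Pi 2/3: peripherals at 0x3F000000, GPIO block
    // at +0x200000, GPLEV0 (pin level register) at byte offset 0x34. Verify
    // against the BCM283x datasheet; pin 17 is just an example input.
    constexpr off_t  GPIO_BASE = 0x3F200000;
    constexpr size_t GPLEV0    = 0x34 / 4;   // word index into the mapping
    constexpr int    PIN       = 17;

    int main() {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);   // requires root
        if (fd < 0) return 1;
        void* m = mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, GPIO_BASE);
        if (m == MAP_FAILED) return 1;
        volatile uint32_t* gpio = static_cast<volatile uint32_t*>(m);

        // Tight polling loop: timestamp every edge for later decoding.
        bool last = gpio[GPLEV0] & (1u << PIN);
        for (;;) {
            bool now = gpio[GPLEV0] & (1u << PIN);
            if (now != last) {
                timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);
                // ... append (now, ts) to a buffer and decode it outside
                //     the loop, e.g. back on the Java side ...
                last = now;
            }
        }
    }

The missing piece, as noted above, is interrupt-driven capture; this version simply burns a core on polling.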
There are limitations in the ESP SDK libraries (which are not public), for example the maximum length of a received packet (112 bytes) in promiscuous mode.
I tried reaching out to them for input and direction, but they seem to reply with nonsense.
Is it possible to program the chip without the SDK, i.e., to make my own SDK and forget about their limitations?
The processor core on the esp8266 is an 'Xtensa'. The processor core, or let's just call it the processor, is what we program with C, C++, or assembler. The processor's instruction set is made public by the company (Tensilica, now part of Cadence), and once you have the instruction set, you can program it directly or build a compiler, giving you complete freedom with the processor.
The processor core is not the complete product for us end consumers; companies like Espressif buy the intellectual-property rights to a processor-core design and build an end product by putting peripherals like SPI, I2C, UART and, in the esp8266's case, the Wi-Fi transceiver, around the processor core.
These peripherals are controlled digitally, and they report back to the processor digitally, so the processor can interface with them, but their action can be either digital or analog. For UART, SPI, I2C, etc., Espressif has provided a datasheet that documents all the registers used to control each peripheral. It works something like: write what you want to transfer to memory address X, then set bit Y at memory address Z to begin the transfer. For SPI, for example, there are registers to control the speed, polarity, phase, etc. of a transfer. Once you know how to control a peripheral at this low level, you can write high-level drivers; Espressif does provide those too, but you can write your own.
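A register-level driver then boils down to exactly that pattern: write a data register, set a "go" bit, wait for the hardware to clear it. Here is a sketch of the shape; the names, addresses, and bit positions below are placeholders, not the real esp8266 register map, which you would take from Espressif's datasheet and register headers:

    #include <cstdint>

    // Hypothetical register layout, standing in for the kind of map the
    // datasheet gives you. Real addresses/bits come from Espressif's headers.
    #define REG(addr)      (*(volatile uint32_t*)(addr))
    #define SPI_CMD_REG    REG(0x60000100)   // command/status register (placeholder)
    #define SPI_DATA_REG   REG(0x60000140)   // first data word (placeholder)
    #define SPI_CMD_GO     (1u << 18)        // "start transfer" bit (placeholder)

    void spi_write_word(uint32_t word) {
        SPI_DATA_REG = word;                 // X: the data to transfer
        SPI_CMD_REG |= SPI_CMD_GO;           // Y at Z: begin the transfer
        while (SPI_CMD_REG & SPI_CMD_GO) {}  // hardware clears the bit when done
    }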
For Wi-Fi, Espressif hasn't released how the peripheral is interfaced with, so we have to depend on the compiled binaries that Espressif ships. You can use 'objdump -t' on 'lib/lib80211.a' to get at least the names of the routines that the Wi-Fi driver provides. You can call these routines from C or assembler code and go a little lower than Espressif intended, but to go any lower would require reverse engineering, manually working through the low-level code in the compiled driver, and nobody is going to take on that risk and time-drain.
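Once 'objdump -t' has given you a symbol name, calling into the blob is just a matter of declaring the routine yourself and letting the linker resolve it against the .a file. The name and signature below are hypothetical; the real ones have to be inferred from the symbol table and from how the SDK's own code calls them:

    #include <cstdint>

    // Hypothetical symbol dug out of lib80211.a with 'objdump -t'; the real
    // name and signature must be inferred, since Espressif publishes neither.
    extern "C" int ieee80211_send_mgmt_frame(const uint8_t* frame, uint16_t len);

    void send_raw(const uint8_t* frame, uint16_t len) {
        // Link against lib/lib80211.a; the linker resolves the symbol from
        // the compiled blob even though there is no header for it.
        ieee80211_send_mgmt_frame(frame, len);
    }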
I need to implement a source-synchronous receiver in a Virtex-6 that receives data and a clock from a high-speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by a BUFIO and a BUFR (as recommended). I hope my picture makes the situation clear.
[Figure: clock distribution]
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using an MMCM and connecting its output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without going through the MMCM first?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has good ideas, wise words, or has maybe worked out a solution to a similar problem in the past.
It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and gain some other nice features (like 90/180/270-degree phase shifts for quad-data-rate sampling).
With the 7-series, Xilinx introduced the multi-region clock buffer (BUFMR), which might help you here. Xilinx has also published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers.
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication.
I'm trying to understand how GNU Radio source blocks work. I know how to make a simple one that outputs a constant, and I understand what sample rate means, but I'm not sure how (or where) to combine the two.
Is the source block in charge of regulating the amount of data to output? Or does the amount it outputs depend on the other blocks in the flow graph and how much they consume? Some source blocks take sample_rate as an input, which makes me think it's the former. But other blocks don't, which makes me think it's the latter.
If a source block is in charge of its sample rate, how does it regulate it? Does it check the system clock and output samples based on that?
Does it check the system clock and output samples based on that?
Definitely not. All GNU Radio blocks run at the maximum speed the processor can deliver.
However, GNU Radio relies on the fact that each flowgraph may have a source and/or sink device (e.g., USRP, another SDR device, a sound card) that produces/consumes samples at a constant rate. Consequently, the flowgraph is throttled at the rate of the hardware.
To avoid CPU saturation when none of these hardware devices is present, GNU Radio provides the Throttle block, which tries (not very accurately) to limit throughput to the given number of samples per second by sleeping for a suitable amount of time between the samples that pass through it.
As far as the sample_rate parameter is concerned, excluding the Throttle and device-specific blocks, it is used only for graphical representation or internal calculations.
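You can see this in the shape of a block itself. Below is a minimal constant source sketched against the GNU Radio C++ API (the Python API mirrors it): work() simply fills however many items the scheduler asks for, and no clock is consulted anywhere.

    #include <gnuradio/sync_block.h>
    #include <gnuradio/io_signature.h>

    class const_source : public gr::sync_block {
    public:
        const_source()
          : gr::sync_block("const_source",
                           gr::io_signature::make(0, 0, 0),                // no inputs
                           gr::io_signature::make(1, 1, sizeof(float))) {} // one float output

        // The scheduler picks noutput_items from the available downstream
        // buffer space and calls work() as often as it can. The block never
        // looks at a clock; a rate only appears if a hardware block or a
        // Throttle imposes one somewhere in the flowgraph.
        int work(int noutput_items,
                 gr_vector_const_void_star& input_items,
                 gr_vector_void_star& output_items) override {
            float* out = static_cast<float*>(output_items[0]);
            for (int i = 0; i < noutput_items; i++)
                out[i] = 1.0f;
            return noutput_items;   // "I produced everything you asked for"
        }
    };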
I'm using an ARM Cortex-M4 and I want to ask if it's possible to offload the main routine from communication tasks and let them run in the background.
For example, I'm using these peripherals on the ARM MCU:
ADC
I2C
UART
SPI
When adc_start(ADC); is called, the ADC starts the conversion in the background, so I don't need to wait until the ADC has finished converting; I can go on to the next instruction and read the ADC result later.
I want to ask if it's possible to do the same with the communication peripherals. I2C and SPI can be fast, but since these MCUs can run at 50 MHz and more, it's a waste of MCU speed to wait until the I2C has finished transmitting at 400 kHz, or the SPI at 20 MHz, or worse, the UART. Also, if I'm performing some task that I don't want interrupted, I need to be able to keep the MCU free of peripheral interrupts while the peripherals still receive packets and buffer them until I want to read them.
Is something like this possible?
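In rough terms, this is the pattern I'm after (all names here are made up, just to illustrate):

    #include <cstdint>

    // Hypothetical helper standing in for reading the real UART data register.
    extern "C" uint8_t UART_READ_DATA_REG(void);

    volatile uint8_t rx_buf[256];
    volatile uint8_t rx_head = 0, rx_tail = 0;

    // Hypothetical vector name: runs in the background on every received byte
    // and buffers it, so the main loop never waits on the UART.
    extern "C" void UART_IRQHandler(void) {
        rx_buf[rx_head++] = UART_READ_DATA_REG();  // uint8_t index wraps at 256
    }

    // Called from the main loop whenever it is convenient to drain the buffer.
    bool uart_read(uint8_t& byte) {
        if (rx_tail == rx_head) return false;      // nothing buffered yet
        byte = rx_buf[rx_tail++];
        return true;
    }

    int main() {
        for (;;) {
            // ... long computation I don't want to interrupt with busy-waiting ...
            uint8_t b;
            while (uart_read(b)) { /* consume whatever arrived in the meantime */ }
        }
    }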
If I've understood the question correctly, you're looking for automatic, interrupt-based handling of fast communication peripherals such as I2C and SPI. As far as I know, YES! It's achievable, at least on the Texas Instruments TIVA-based ARM Cortex-M4 series MCUs. It's quite a nifty little feature to have around when you're working on computationally intensive algorithms and don't want the CPU bogged down waiting for the SPI to finish its task.
For a good reference on programming the CORTEX M4 peripherals, I recommend keeping this book handy:
http://www.amazon.com/TI-ARM-Peripherals-Programming-Interfacing-ebook/dp/B00L9DRAI2
Table 6-7 in chapter 6 of the book details the interrupt vector table of the TM4C123G MCU (the one shipped with the TIVA LaunchPad). Interrupts 50 and 53 are the assignments for the SSI/SPI and I2C peripherals respectively. The process should be fairly straightforward once you unmask the right interrupts.
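As a rough sketch of what that unmasking looks like with the TivaWare driverlib calls (this assumes SSI0 and receive-side interrupt flags; check your part's headers and startup file for the exact instance, flags, and vector name):

    #include <stdint.h>
    #include <stdbool.h>
    #include "inc/hw_memmap.h"
    #include "inc/hw_ints.h"
    #include "driverlib/ssi.h"
    #include "driverlib/interrupt.h"

    volatile uint32_t spi_rx[64];
    volatile uint32_t spi_rx_count = 0;

    // ISR registered for SSI0 in the startup file's vector table: drain the
    // RX FIFO that the peripheral filled while the CPU was busy elsewhere.
    void SSI0IntHandler(void) {
        uint32_t word;
        SSIIntClear(SSI0_BASE, SSI_RXFF | SSI_RXTO);
        while (SSIDataGetNonBlocking(SSI0_BASE, &word))
            spi_rx[spi_rx_count++ % 64] = word;
    }

    void spi_interrupts_init(void) {
        SSIIntEnable(SSI0_BASE, SSI_RXFF | SSI_RXTO); // RX FIFO half-full / timeout
        IntEnable(INT_SSI0);                          // unmask in the NVIC
        IntMasterEnable();                            // global interrupt enable
    }

The same pattern applies to the I2C peripheral with its own interrupt number and flag set.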
I have a BeagleBone running Ubuntu. We want to continuously sample from three on-board A/D converters at 100 kS/s, and for every window of samples we will run a cross-correlation DSP algorithm. Once we find a correlation value above a threshold, we will send the value to a PC.
My concern is process scheduling in Ubuntu. If our process gets swapped out and an A/D sample becomes available during that time, the process will miss the sample. We need to ensure that our process captures every sample and saves it in memory.
With that said, is there a way to trigger interrupts on the BeagleBone so that when an A/D sample is ready, it is saved into our program's memory even if the program doesn't hold the processor at that moment?
Thanks!
You might be able to trigger the EDMA or use the PRUSS. It's probably best to ask on beagleboard@googlegroups.com. There isn't a DSP per se on the BeagleBone.
This is not exactly an answer to your question, but hopefully it explains how the process works. Since you didn't mention what hardware you are using for A/D conversion, maybe this is the best that can be done:
With audio hardware, which faces the same problem, the solution comes from the hardware and the driver working together: whenever the hardware has filled enough of a buffer, it signals the driver (via an interrupt or some similar mechanism). In some cases the driver may instead poll the hardware, but that's a less efficient solution, and I'm not sure anyone does it that way anymore (maybe on cheaper hardware?). From there, the driver may call right into the end-user process, or it may simply mark the relevant end-user process as "runnable". Either way, control needs to be transferred to the end-user process.
For that to happen, the end-user process must be running at a higher priority than anything else occupying the CPUs at that moment. To guarantee that your process is always first in the queue, you can run it at a high priority; with the appropriate permissions, you can even use the real-time scheduling classes.
The time it takes for the top-priority process to go from runnable to running is sometimes called the "latency" of the OS, though I'm sure there's a more specific technical term. The latency of Linux is on the order of 1 ms, but since it's not a hard real-time OS, this is not guaranteed. If that's too long for your chunks of data, you may have to buffer some of them in your driver.
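To make the priority suggestion concrete, here is a minimal sketch using the standard POSIX calls (the priority value is an arbitrary example, and SCHED_FIFO needs root or CAP_SYS_NICE):

    #include <cstdio>
    #include <sched.h>
    #include <sys/mman.h>

    int main() {
        // Lock all current and future pages into RAM so the sampling loop
        // never stalls on a page fault.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            std::perror("mlockall");

        // Ask for SCHED_FIFO: the process preempts every normal-priority
        // task and keeps the CPU until it blocks or yields. Priority 80 is
        // just an example (the range is 1 low .. 99 high).
        sched_param sp{};
        sp.sched_priority = 80;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            std::perror("sched_setscheduler");

        // ... sampling and cross-correlation loop goes here ...
        return 0;
    }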