I have a stream of numbers (integers for the sake of discussion) being sampled off an analog input (an A/D converter attached to a potentiometer). I am curious how I would recognize a pattern in the numbers in real time.
That is to say, if someone quickly twiddles the pot all the way up and back down, how do I recognize that, versus if they turn it only halfway? Or what if they turn it up and down three times in a row? How can I convert these actions into distinct "events"? This seems especially tricky to me since the time window over which each of these events occurs will be modestly variable.
I can think of a few quick, hacky ways to do this, but nothing I am confident in. I am also curious how one would expand this out to multiple different inputs (e.g., input off a spectrograph). Does that change things dramatically? I am not even sure what topic area I should be googling.
If you know what you are looking for, correlate the input signal against a replica of what you expect; basically, implement a matched filter. If you want to see when the input stream is -127, -63, 0, 63, 127, implement a direct-form FIR filter with these values as the coefficients, then look for a maximum on the output. The maximum output of a filter with those coefficients occurs when the data in the filter is -127, -63, 0, 63, 127.
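For illustration, a minimal numpy sketch of that idea (the input stream here is made up):

```python
# Minimal matched-filter sketch; the coefficient values and the sample
# stream are just illustrative, not from any real hardware.
import numpy as np

template = np.array([-127, -63, 0, 63, 127], dtype=float)  # expected pattern
samples = np.array([0, 0, -127, -63, 0, 63, 127, 0, 0], dtype=float)

# np.correlate slides the template along the signal; this is the
# matched-filter operation (as an FIR filter, the coefficients would be
# the time-reversed template).
output = np.correlate(samples, template, mode="valid")

# The output peaks at the position where the input best matches.
peak_index = int(np.argmax(output))
print(peak_index, output[peak_index])
```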
Google "matched filter detection" or "detection theory", maybe even "feature detection".
If you don't know exactly what you are looking for, or what you are looking for is variable, it gets more complicated. You would then try to implement a filter whose output gives you information about what is going on. The example I gave above would show a spike in the output when that input sequence occurred. If you then saw that spike occurring at regular intervals, you would guess that the input event was occurring at regular intervals.
If you made your filter 0, 63, 127, 63, 0, which correlates to turning the knob all the way up and then back down again, and on your output saw the aforementioned spike occur, but with a lower maximum amplitude and spread over a wider time, that might tell you that the knob was turned all the way up and then back down, but either slower or faster than the speed for which the filter is designed to give its maximum response.
To combat this you might implement three of these filters in parallel: one designed for a slow knob turn, one for a medium-speed knob turn, and one for a fast knob turn. Looking at the three outputs then gives you three different correlations which, together, better tell you what is occurring.
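A hedged sketch of such a filter bank; the template lengths for the three speeds are made-up assumptions:

```python
# Three matched filters for the same up-down gesture at different speeds.
import numpy as np

def up_down_template(n):
    """Knob goes all the way up then back down over n samples."""
    half = np.linspace(0, 127, n // 2)
    return np.concatenate([half, half[::-1]])

bank = {
    "fast":   up_down_template(10),
    "medium": up_down_template(30),
    "slow":   up_down_template(90),
}

def correlate_bank(samples):
    # Normalize by template energy so the three outputs are comparable.
    return {name: float(np.max(np.correlate(samples, tpl, mode="valid"))
                        / np.sum(tpl ** 2))
            for name, tpl in bank.items()}
```

The filter whose (normalized) output peaks highest tells you roughly how fast the gesture was performed.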
Did you consider taking the running difference of the signal (its differentiation)?
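In numpy that is essentially a one-liner; a tiny illustrative sketch (the readings are made up):

```python
import numpy as np

samples = np.array([0, 10, 40, 90, 127, 90, 40, 10, 0])
velocity = np.diff(samples)   # > 0 while turning up, < 0 while turning down
print(velocity)
```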
I am using OpenCV and OpenVINO and am trying to figure out how, when I have a face detected, to use cv2.rectangle and have my coordinates sent for only the first person bounded by a box, so it can move the motors. When it sees multiple people, it sends multiple coordinates, causing the servo and stepper motors to go crazy. Any help would be appreciated. Thank you.
Generally, code runs line by line. You'll need to create a proper function for each scenario so that the data can be handled and processed properly. In short, you'll need to implement error handling and data handling (probably more than these, depending on your software/hardware design). If you are trying to run multiple threads of execution at the same time, it is better to use multithreading.
Besides, you are using two types of motors. Simply taking in all the data is inefficient and prone to losing data. You'll need to be clear about what the servo motor and stepper motor tasks are, the relations between the coordinates, who triggers what, what to do if something fails or a sequence is missing, and so on.
For example, the sequence for Data A should produce Result A, but it is halted halfway because Data B went into the buffer, interfered with Result A, and at the same time corrupted Result B, which was expected to happen. (This is what happened in your program.)
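One common way to stop a new detection from interfering with a move already in progress is to serialize motor commands through a queue consumed by a single worker thread. A minimal sketch; move_motors() is a hypothetical stand-in for your servo/stepper code:

```python
import queue
import threading

commands = queue.Queue()            # the detection loop puts (x, y) here

def move_motors(x, y):
    print("moving to", x, y)        # replace with real motor control

def motor_worker():
    # Single consumer: commands execute one at a time, so Data B can
    # never interrupt the move triggered by Data A.
    while True:
        x, y = commands.get()       # blocks until a command arrives
        move_motors(x, y)
        commands.task_done()

threading.Thread(target=motor_worker, daemon=True).start()
commands.put((320, 240))            # e.g. a target from the detection loop
commands.join()                     # wait until the command is handled
```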
It's good to review and design your whole process by creating a coding flowchart (a diagram that represents an algorithm). It will give you a clear idea of what should happen in each sequence of the code. Then design a proper handler for each situation.
Can you share more of your (pseudo-)code, please?
It sounds easy: you trigger a face-detection inference request and you get a list/vector with all detected faces (the region of interest for each detected face), including false positives, which require some consistency checks to filter out.
If you are interested in the first detected face only, then you could just process the first returned result from the list/vector.
However, you will see that sometimes the order of the results changes; e.g., when two faces A and B were detected, the next run could still return both faces, but with B first and then A.
You could add object-tracking on top of face-detection to make sure you always process the same face.
(But even that could fail sometimes)
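As a concrete starting point, a hedged OpenCV-style sketch of both ideas; the (x, y, w, h) box format, the dummy frame, and the detection values are assumptions to adapt to whatever your OpenVINO inference request actually returns:

```python
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)        # stand-in frame
detections = [(300, 120, 80, 80), (100, 140, 90, 90)]  # assumed boxes

def pick_first_face(boxes):
    """Return one stable 'first' face, or None if nothing was detected."""
    if not boxes:
        return None
    # Sort by x so "first" stays the same physical face even when the
    # model returns the boxes in a different order on the next frame.
    return sorted(boxes, key=lambda box: box[0])[0]

face = pick_first_face(detections)
if face is not None:
    x, y, w, h = face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    target = (x + w // 2, y + h // 2)   # send only this center to the motors
```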
I am looking for a way to code or find a program that can record the chirps of crickets, either live or through a prerecorded audio file (a large one, ~24 hours), for a lab experiment.
I'm not too sure how to approach this as I'm a web developer, but I have experience with JS and Python, along with their libraries. My initial idea was to use Matplotlib to produce an audio visualizer and then count each time a certain dB range is reached that matches the dB of a cricket chirp, but I don't know how to implement that.
I have successfully visualized the chirps in an online spectrum analyzer (Spectrum Visualizer of Audio Chirps) and can see them clearly. However, I don't know how I can use code to count each "chirp" and record it, along with the date and time, in a table of values / dataset of some sort.
Any guidance or help would be greatly appreciated!
Really naive solution: for every unit of time (each column of pixels in the spectrogram), calculate the sum of all the values in that column. For example, if all the pixels in a column are black, the sum for that column will be 0; if some of them are colored, the sum will be > 0.
Then loop over the sums: if at a certain step you go from 0 to > 0, and after a while you go back to 0, you've hit a peak.
If the output is too "noisy", you can use a small threshold value instead of 0. Tweak it until the results look OK. This works well when the background noise is pretty much constant all the way through; if it drifts up and down over the recording, you need something more complicated (for example, a threshold that tracks the average of the last n sums).
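A rough Python sketch of the column-sum idea, assuming the audio is in a wav file; "crickets.wav", the threshold rule, and the frequency band are placeholders to tweak:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("crickets.wav")
if audio.ndim > 1:
    audio = audio[:, 0]                      # keep one channel

freqs, times, spec = spectrogram(audio, fs=rate)
# Optionally keep only rows near the chirp frequency to reject other noise:
# spec = spec[(freqs > 3000) & (freqs < 6000), :]
column_sums = spec.sum(axis=0)               # one value per time slice

threshold = 2 * column_sums.mean()           # tweak until results look OK
above = column_sums > threshold

chirp_times = [times[i] for i in range(1, len(above))
               if above[i] and not above[i - 1]]   # rising edges = chirps

print(len(chirp_times), "chirps detected")
```

Each entry in chirp_times is an offset in seconds into the file; adding it to the recording's start timestamp gives you the date/time column for your dataset.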
Or you can Google "peak detection algorithm" and implement one of them. Or you can Google "peak detection library" in your favourite language and use one of them.
What I'm doing is GPGPU on WebGL, and I don't know whether the access pattern I'll be describing applies to general graphics and gaming programs. In our code we frequently come across data which needs to be summarized or reduced per output texel. A very simple example is matrix multiplication, during which, for every output texel, you return a value which is the dot product of a row of one input and a column of the other input.
This has been the sore point of our performance, caused not so much by the computation as by the repeated data accesses. So I've been trying to find a pattern of reads, or a data layout, which would expedite this operation, and I have been completely unsuccessful.
I will be describing some assumptions and some schemes below. The sample code for all these are under https://github.com/jeffsaremi/webgl-experiments
Unfortunately, due to size, I wasn't able to use the 'snippet' feature of StackOverflow. NOTE: all examples write to the console, not the HTML page.
Base matmul implementation: example: [2,3]x[3,4]->[2,4]. This produces, in simplistic form, two textures of (w:3, h:2) and (w:4, h:3). For each output texel I read along the X axis of the left texture but along the Y axis of the right texture. (See webgl-matmul.html.)
Assuming that the GPU accesses data similarly to a CPU -- that is, block by block -- then if I read along the width of the texture I should be hitting the cache pretty often.
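That assumption is at least easy to sanity-check on the CPU side; a purely illustrative numpy sketch (the GPU may of course behave differently):

```python
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)            # C-contiguous: rows are contiguous

t0 = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i, :].sum()          # row slices: sequential memory reads
t1 = time.perf_counter()
total = 0.0
for j in range(n):
    total += a[:, j].sum()          # column slices: strided, cache-hostile
t2 = time.perf_counter()

print("rows: %.3fs  cols: %.3fs" % (t1 - t0, t2 - t1))
```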
For this, I'd layout both textures in a way that I'd be doing dot products of corresponding rows (along texture width) only. Example: [2,3]x[4,3]->[2,4] . Note that the data for the right texture is now transposed so that for each output texel I'd be doing a dot product of one row from the left and one row from the right. (see webgl-matmul-shared-alongX.html)
To ensure that the above assumption is indeed working, I created a negative test also. In this test I'd be reading along the Y axis of both left and right textures which should have the worst performance ever. Data is pre-transposed so that the results make sense. Example: [3,2]x[3,4]->[2,4]. (see webgl-matmul-shared-alongY.html).
So I ran these -- and I hope you could do as well to see -- and I found no evidence to support existence or non-existence of such caching behavior. You need to run each example a few times to get consistent results for comparison.
Then I came along this paper http://fileadmin.cs.lth.se/cs/Personal/Michael_Doggett/pubs/doggett12-tc.pdf which in short claims that the GPU caches data in blocks (or tiles as I call them).
Based on this promising lead I created a version of matmul (or dot product) which uses blocks of 2x2 to do its calculation. Prior to using this of course I had to rearrange my inputs into such layout. The cost of that re-arrangement is not included in my comparison. Let's say I could do that once and run my matmul many times after. Even this scheme did not contribute anything to the performance if not taking something away. (see webgl-dotprod-tiled.html).
A this point I am completely out of ideas and any hints would be appreciated.
thanks
I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in frequency domain so that mark and space are at +-500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (midway between 2200 and 1200 Hz) ends up at 0 Hz, and then filtered by a low-pass (gain = 1.0, stopband starts at 1 kHz, passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth still present in the signal is much lower than the sample rate, so you could also just downsample without losses (set decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
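For reference, a minimal Python version of that chain, assuming a 44.1 kHz mono wav file; "input.wav", "demod.bin", and the exact filter numbers are placeholders, and this only sketches the xlating-FIR plus quadrature-demod idea, not a finished decoder:

```python
import math
from gnuradio import gr, blocks, analog
from gnuradio import filter as gr_filter
from gnuradio.filter import firdes

class fsk_demod(gr.top_block):
    def __init__(self, samp_rate=44100):
        gr.top_block.__init__(self)
        src = blocks.wavfile_source("input.wav", False)
        to_c = blocks.float_to_complex()
        center = (1200 + 2200) / 2.0          # shift 1700 Hz down to 0 Hz
        taps = firdes.low_pass(1.0, samp_rate, 600, 400)
        xlate = gr_filter.freq_xlating_fir_filter_ccf(
            1, taps, center, samp_rate)       # decimation left at 1 here
        deviation = (2200 - 1200) / 2.0       # tones now sit at +-500 Hz
        demod = analog.quadrature_demod_cf(
            samp_rate / (2 * math.pi * deviation))
        sink = blocks.file_sink(gr.sizeof_float, "demod.bin")
        self.connect(src, to_c, xlate, demod, sink)

fsk_demod().run()
```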
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing just about any "somewhat round" pulse shape could work.
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:
I'm pretty much a noob when it comes to this kind of thing, so if you guys could either help me or direct me to a place to learn what I need to know, I would greatly appreciate it.
Basically my problem is that I am using the libpruio library to continuously sample analog values from the board. Two things are going wrong here.
The first is that whenever the BB is sampling the voltages, the voltage of the wire that is hooked up to the AIN pin goes up. I've observed this by hooking an oscilloscope up to the same wire the pin is sampling. What I see is that whenever the BB starts sampling, the entire signal (just a sound wave from an amplified mic) is shifted up by 0.8-0.9 V. This is also reflected in the values that I get from the BB, which are around 30,000 (when they should be 0). Hooking the pin up to ground gets me 0, which is correct, and hooking it up to 1.8 V gets me something like 65520, which is also correct. So maybe it has something to do with the signal being weak?
The second issue is that even though I am receiving values at a rate of around 500-900 kHz, the actual sampling rate seems to be around 11 kHz. What I mean is that I only get a new value every 88 µs, and the values I receive in between stay the same until the next 88 µs pass, when I get a new one. These times correspond to the voltage shift I mentioned in the previous paragraph. So what I actually see on the oscilloscope whenever I sample with the BB is a saw wave, with its frequency at the 11 kHz I mentioned earlier.
In conclusion, whenever the BB samples, it first increases the voltage at the pin by 0.9 V, takes a sample of that voltage, and the voltage dies down over the next 88 µs, all the while the BB spits back the sample it took at the beginning of the period. I do not want this. I want it to not affect the voltage significantly, and to take a new sample every time the code runs.
As for the code I'm using, it's basically a slightly modified version of the IO_Input example in the libpruio library, with the values being stored in an array for later use instead of being printed immediately.
If you guys need any more information, I will gladly post it here, but for now I'm wondering if it is something super obvious that I'm missing.
"Hooking the pin up to ground gets me 0, which is correct, and hooking it up to 1.8 volts gets me something like 65520, which is also correct. So maybe it has something to do with the signal being weak?"
The BBB and libpruio seem to work OK. Check your wiring.
Regarding the sampling rate: the io_input example uses IO mode. If you need accurate timing for the samples, use MM mode or RB mode.
Your goal isn't very clear, so I cannot give detailed advice. (Some code would also help in understanding what you're trying to do.)
BR