I have been handed many GC dev boards and SOMs at work, but there seems to be no external way to tell them apart by looking at part/model numbers. My coworkers are disorganized and have mixed three different orders together, and we cannot tell which is which anymore. I could power them all on and look in the terminal, but surely there must be an easier way. I don't know why there is no label or other discerning factor on the packaging or the SOM itself.
The datasheet's only guidance doesn't help: those numbers don't correlate to any numbers on the actual SOM board. The model number, "AA1", is the same for all of them, but at the top is "09JF001TK", which contains the manufacture date; I cannot decipher what the other letters/numbers mean. I sent in a support request but have not heard back; I hope you can help. Neither QR code seems to yield any results.
Apologies if the title isn't very clear.
What I am trying to do is get a Google Sheet to automatically calculate how many lengths of a material I will need to cover an area, ideally mixing lengths if needed. There are three different lengths of material that never change, but the total area I need to cover changes case by case. It is only a straight line, so there is no need to worry about width or height.
The data breaks down as follows:
Pre-set lengths to choose from
10'6"
12'6"
14'6"
Length of the area I need to cover, which only comes in inches (e.g. 68 1/2", 70", 59")
The only thing I have managed so far is to take the length I need to cover and manually pick out how many pieces of each length I need; I cannot think of any way to have a formula or script optimize how many of each piece I need. I understand formulas well enough, but once scripting comes into play I start getting lost. I believe this problem may be beyond the capabilities of formulas.
This is an interesting problem - I don't have the 'reputation' required to comment, but to be clear: you're actually trying to find the 'best fit' of the available lengths to cover the required length?
If that's the case then yes, you're not going to get there without scripting. Fortunately, there are other folks who have this problem and have solved it... you could look at this online cut-list calculator for an example. I think that one even includes an embeddable script for your sheets.
If you're looking to solve the problem yourself because it's interesting, googling 'optimal cut list' or the like will turn up references. Usually you're optimizing on two variables (e.g. 'fewest joins' and 'least waste'), which tips you over into the world of linear programming (only just...) if you want to go there. If it were me, I'd just dig up a few example scripts and map how they operate back to a theoretical description (e.g. this wiki article.)
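To make that concrete, here is a minimal brute-force sketch of the 'best fit' search (Python rather than Apps Script, purely for illustration; the stock lengths come from the question, and minimising waste before piece count is an assumption about the objective):

    from itertools import product

    STOCK = [126, 150, 174]  # 10'6", 12'6", 14'6" converted to inches

    def best_fit(target_inches, max_pieces=10):
        """Try every combination of piece counts and keep the one that covers
        the target with the least waste, breaking ties on fewest pieces."""
        best_key, best_counts = None, None
        for counts in product(range(max_pieces + 1), repeat=len(STOCK)):
            total = sum(c * length for c, length in zip(counts, STOCK))
            if total < target_inches:
                continue  # this combination doesn't cover the run
            key = (total - target_inches, sum(counts))  # (waste, joins)
            if best_key is None or key < best_key:
                best_key, best_counts = key, counts
        return best_counts, (best_key[0] if best_key else None)

    print(best_fit(68.5))  # -> ((1, 0, 0), 57.5): one 10'6" piece, 57.5" waste

The same loop structure ports directly to Apps Script if you want the result in the sheet itself.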
I'm rather new to Bluetooth Low Energy devices, and having recently purchased a bunch of trackers off Amazon, I decided to write a little application to see what kind of information I can get from them.
The trackers are from a Chinese company, and there isn't much documentation on their advertisement data, so I'm going by best guess here.
What I've been able to achieve so far, through Flutter Reactive BLE, is to find the devices by their ID (filtering out additional noise I don't care about) and pull information like RSSI, name and ID from them.
Now I want to interpret the manufacturerData object, screenshot attached of just one of them, and can't seem to get anything concrete from it.
I half assumed that reactive_ble would have stripped the leading bytes and supplied only the portion of the data object that's relevant to interpret; however, this does not seem to be the case.
My first instinct was to just convert this Uint8List to a String with utf8.decode(device.manufacturerData); however, this returns either a single-space string or nothing at all.
I've tried using ByteData with a start of 3 and end of 4, and that's not very helpful either.
Is there something I'm missing in its interpretation? I've read the Bluetooth spec, but as I don't come from a CompSci background it's rather foreign to me, so I would appreciate a layman's response.
The first 16 bits (little endian) of the manufacturer data contain the manufacturer ID (the Bluetooth SIG website has a list). The layout of the rest of the bytes is entirely up to the manufacturer. If you can't guess what they mean, you'll have to ask the manufacturer.
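For illustration, here is a minimal sketch of that split (Python rather than Dart, and the example payload is made up; in Flutter you'd apply the same slicing to the Uint8List):

    def parse_manufacturer_data(data: bytes):
        """Split BLE manufacturer-specific data into (company_id, payload)."""
        if len(data) < 2:
            raise ValueError("manufacturer data needs at least 2 bytes")
        # First two bytes: Bluetooth SIG company identifier, little endian.
        company_id = int.from_bytes(data[:2], "little")
        return company_id, data[2:]

    # Hypothetical example: 0x004C (Apple) followed by vendor-defined bytes.
    cid, payload = parse_manufacturer_data(bytes.fromhex("4c0002150102"))
    print(hex(cid), payload.hex())  # 0x4c 02150102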
I am writing because I found no public documentation or code that resolves this question. I have been using the AlphaVantage APIs for a project on stock-market prediction with machine learning. I use many of the technical indicators in the AlphaVantage library, and many of them are computed over rolling windows of data points (e.g. moving averages).
However, many financial libraries tend to update the values they previously computed for some of these indicators, using windows that retain future information relative to the point in time the indicator refers to. Obviously, that would be "hidden" information that a predictive system like mine, relying only on past or present information, should not have access to.
Hence, I was wondering whether the same is true of the AlphaVantage library. I manually checked a lot of indicators for the same stock (and repeated the process for many stocks) days apart, and I did not find any inconsistencies in the values for the common dates (the only difference being that the most recent versions of those technical indicators have new points, reflecting the newer price history).
I would be very pleased if anybody could help me resolve this.
Most indicators use a look-back window of quote values, including the current price, to calculate current indicator values. Many also include previously calculated indicator values as a basis for current indicator values. Fewer still recalculate older indicator values based on new price information.
For this last scenario, looking at the AlphaVantage library, I don't see any indicators that would recalculate older values based on newer data. If you're seeing indicator values change, it's probably due to revisions or updates of their underlying quote history.
I have a rather large .NET library of indicators, so I’m familiar with which kinds behave that way, due to the mathematics.
Some examples of indicators with retroactive recalculation are ZigZag and Williams Fractal. The reason they do this is because they find local high and low points, which can’t be verified without several confirming bars of data. In other words, you cannot indicate a high point until several lower bars occur thereafter.
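As a toy illustration of why such indicators repaint (a Williams-Fractal-style high detector of my own, not any library's actual code; the two-bar 'wing' on each side is the common default):

    def fractal_highs(highs, wing=2):
        """Flag bars whose high strictly exceeds the `wing` bars on each side.
        A bar can only be confirmed once `wing` later bars exist, which is
        why the most recent bars can never carry a confirmed signal."""
        flags = [False] * len(highs)
        for i in range(wing, len(highs) - wing):
            window = highs[i - wing : i + wing + 1]
            flags[i] = highs[i] == max(window) and window.count(highs[i]) == 1
        return flags

    print(fractal_highs([1, 3, 5, 4, 2, 6]))
    # -> [False, False, True, False, False, False]; the final 6 is unconfirmed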
I have a slightly unusual profanity-related question.
Now we're used to dealing with profanity-filtering of user-generated content — any method is imperfect, but products like CleanSpeak and WebPurify do a good-enough job.
The problem we have at the moment, though, is that we've been building an engine to run promotional-code–based competitions, that will be used internationally. We could do with checking that none of these codes is profane in Latin American Spanish or Malay (at least in the first instance), to make sure we don't send out a code that's equivalent to FUCK23 or PEN15 or something.
We've tried Googling around and asking people we know, but we can't find an easy way of getting hold of an es-419 or an ms profanity list to filter the codes against. As there are literally millions of codes per locale, we'd rather do an offline check than hit an API for each code (which would be expensive both in terms of bandwidth and usage fees).
I know this is a bit of a long shot, but does anyone know of a good source for profanity lists in different languages?
Disclaimer: we know that no profanity filtering is perfect, that it's essentially futile with user-generated content, and we have read SO #273516: How do you implement a good profanity filter? — that's not what we're asking.
Building or finding lists in other languages is extremely time-consuming and difficult (trust me, we've built many of them at Inversoft). You might be better off tweaking the code generators instead (from what I can tell, your codes are generated programmatically rather than by humans).
The best way to tweak a generator is to ensure that the codes can't easily form words based on the general use of consonants and vowels in most European languages. Things get a bit dicey in Polish and others, but it usually works.
Generally, a code that starts with a vowel should be followed by another vowel or a non-joining consonant (like 'q' without a 'u'). If the code starts with a consonant, then the next character should be the same consonant or one that has a low probability of following it. For example, if you start with 's', then adding 'g' is a good choice.
You could also use Wiktionary or similar sources (like Linux dictionary files) to build a statistical approach. By extracting the probability of characters appearing next to each other, you should be able to generate codes that are very unlikely to form words in any language.
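A rough sketch of that statistical idea (the function names, the threshold, and the dictionary path are all my own inventions, not any known library):

    import random
    import string

    def build_bigram_counts(words):
        """Count how often each adjacent letter pair occurs in real words."""
        counts = {}
        for w in words:
            w = w.lower()
            for pair in zip(w, w[1:]):
                counts[pair] = counts.get(pair, 0) + 1
        return counts

    def looks_wordlike(code, counts, threshold=50):
        """True if every adjacent letter pair in the code is common in words."""
        pairs = [(a.lower(), b.lower()) for a, b in zip(code, code[1:])
                 if a.isalpha() and b.isalpha()]
        return bool(pairs) and all(counts.get(p, 0) >= threshold for p in pairs)

    def generate_code(counts, length=6,
                      alphabet=string.ascii_uppercase + string.digits):
        while True:  # resample until the code doesn't resemble a word
            code = "".join(random.choices(alphabet, k=length))
            if not looks_wordlike(code, counts):
                return code

    words = open("/usr/share/dict/words").read().split()  # or a Wiktionary dump
    counts = build_bigram_counts(words)
    print(generate_code(counts))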
However, if I misread your question and you aren't generating the codes programmatically, you can ignore my response completely. :)
I have had the same thoughts while trying to generate 6-character codes for a project I am doing.
I decided to reduce the likelihood of obviously profane codes, so I removed the vowels I found in as many "bad" words as I could think of from my initial base-36 generation code, leaving me with something more like a base-28 system that did not include a, e, i, o, u, 1, or 0. The one and zero were removed to reduce confusion with I, L, and O in some fonts.
So far I have not seen a "profane" code generated, although base 28 still gives roughly 482 million (28^6) unique six-character combinations.
I cannot vouch for other languages, and had not even considered them...
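For what it's worth, here is a sketch of that reduced-alphabet generator (my reconstruction; note that dropping a, e, i, o, u, 0 and 1 from base 36 actually leaves 29 symbols, so the exact alphabet described above may differ slightly):

    import secrets

    # Base 36 minus the vowels and the look-alike digits 0 and 1.
    ALPHABET = "BCDFGHJKLMNPQRSTVWXYZ23456789"

    def make_code(length=6):
        """Draw each character from the vowel-free alphabet."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(make_code())  # e.g. 'X7KQ2M'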
I am searching for an algorithm to determine whether real-time audio input matches one of 144 given (and comfortably distinct) phoneme pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this; it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps required?
Would training be necessary by the user? I would have thought not, as it is such an elementary task, compared with full language models of thousands of words and far greater and more subtle phoneme base. However, it would be acceptable (not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution that runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then glows to show whether they are singing the correct pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Your training database should also be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and send the recorded waveform over a data channel to a server for recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work already.
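To give a feel for the minimal template-matching route, here is a generic MFCC-plus-DTW sketch (Python with librosa, not Matchbox's actual API; the idea of one reference recording per phoneme pair and the file layout are assumptions):

    import librosa

    def mfcc(path):
        """Load a recording and return its MFCC feature matrix."""
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    def dtw_cost(a, b):
        """Accumulated DTW cost between two MFCC sequences (lower = closer)."""
        D, wp = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
        return D[-1, -1] / len(wp)  # normalise by warping-path length

    # references: one recording per phoneme pair, e.g. {"laa": "refs/laa.wav"}
    def classify(query_path, references):
        q = mfcc(query_path)
        return min(references, key=lambda k: dtw_cost(q, mfcc(references[k])))

Template matching like this might sidestep the training-data problem the HTK answer raises, at the cost of being largely speaker-dependent.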