Programmable Logic Devices

I am confused about the structure of a PAL device.
My first question: if we buy a PAL device, how can we know how many minterms are summed by each OR gate in the OR array? In other words, is there a standard that tells us the number of inputs each OR gate in the OR array has?
The next thing is that the AND array in a PAL device is programmable. Now suppose we have 4 inputs; then each AND gate in the AND array should need 8 inputs. It is up to us how many variables we apply to it, but there is a possibility that we apply all the variables (and their complements) to one AND gate, so it should have 8 inputs. Please tell me whether I am right or not; if not, please explain.

I think there is no universal standard for either of your questions. The data-sheet for each device specifies those parameters. You should look up the data-sheets and decide what suits your needs.
Specifically on your second question, an ideal PAL would be as you say (like this simplified circuit). But usually you don't want to apply all the variables (and their negations) to the AND gates, so each AND gate can have fewer inputs (of course, using the grid you can choose any of the variables to apply, just not all of them together).
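For what it's worth, here is a toy model of one AND line in a programmable AND array (a sketch in Java; the names and fuse pattern are invented for illustration, not taken from any datasheet). It shows the point above: with N inputs the array carries 2N columns, one for each input and one for its complement, and the fuse pattern selects which literals actually feed the AND gate.

    public class PalAndLine {
        public static void main(String[] args) {
            boolean[] inputs = {true, false, true, true}; // N = 4 inputs: A, B, C, D
            // One row of the fuse map: 2N = 8 entries, one per column
            // (A, A', B, B', C, C', D, D'). 'true' means the fuse is intact,
            // i.e. that literal is connected to the AND gate.
            boolean[] fuses = {true, false, false, true, true, false, false, false}; // A.B'.C

            boolean product = true;
            for (int i = 0; i < inputs.length; i++) {
                if (fuses[2 * i])     product &= inputs[i];   // true-literal column
                if (fuses[2 * i + 1]) product &= !inputs[i];  // complemented column
            }
            System.out.println("Product term A.B'.C = " + product);
        }
    }

A real device's fuse map is of course fixed by its datasheet; the sketch only illustrates why an AND gate that may see every variable and its complement needs 2N input columns.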

Related

Is there a way to have a formula or script pick an amount of pre-set lengths to cover an area

Apologies if the title isn't very clear.
What I am trying to do is get a Google Sheet to automatically calculate how many lengths of a material I will need to cover an area, hopefully including a mix of lengths if needed. There are three different lengths of material that never change, but the total area I need to cover changes on a case-by-case basis. It is only a straight line, so there is no need to worry about width or height.
The data breaks down as follows:
Pre-set lengths to choose from
10'6"
12'6"
14'6"
Length of area I need to cover, which only comes in inches (e.g. 68 1/2", 70", 59")
The only thing I have been successful at is taking the length I need to cover and manually picking out how many pieces of each length I need; I cannot think of any way to have a formula or script optimize how many of each piece I need. I understand formulas well enough, but once scripting comes into play I start getting lost. I believe this issue may be beyond the capabilities of formulas.
This is an interesting problem. I don't have the 'reputation' required to comment, but to be clear: you're actually trying to find the 'best fit' of the available lengths to cover the required length?
If that's the case then yes, you're not going to get there without scripting. Fortunately, there are other folks who have this problem and have solved it... you could look at this online cut-list calculator for an example. I think that one even includes an embeddable script for your sheets.
If you're looking to solve the problem yourself because it's interesting, googling 'optimal cut list' or the like will turn up references. Usually you're optimizing on two variables (e.g. 'fewest joins' and 'least waste'), which tips you over into the world of linear programming (only just...) if you want to go there. If it were me, I'd just dig up a few example scripts and map how they operate back to a theoretical description (e.g. this wiki article.)
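To give a feel for what such a script does, here is a brute-force sketch, written in Java for illustration (an Apps Script version would be the same loops in JavaScript). It tries every combination of the three stock lengths that covers the target and keeps the one with the least waste, using fewest pieces as a tie-breaker:

    public class CutList {
        static final int[] STOCK = {126, 150, 174}; // 10'6", 12'6", 14'6" in inches

        public static void main(String[] args) {
            double target = 68.5; // length to cover, in inches
            int[] best = null;
            double bestWaste = Double.MAX_VALUE;
            // No combination needs more pieces than ceil(target / shortest length).
            int maxPieces = (int) Math.ceil(target / STOCK[0]);

            for (int a = 0; a <= maxPieces; a++)
                for (int b = 0; b <= maxPieces; b++)
                    for (int c = 0; c <= maxPieces; c++) {
                        double covered = a * STOCK[0] + b * STOCK[1] + c * STOCK[2];
                        if (covered < target) continue; // doesn't cover the run
                        double waste = covered - target;
                        int pieces = a + b + c;
                        if (waste < bestWaste
                                || (waste == bestWaste && best != null
                                    && pieces < best[0] + best[1] + best[2])) {
                            bestWaste = waste;
                            best = new int[]{a, b, c};
                        }
                    }

            System.out.printf("10'6\": %d, 12'6\": %d, 14'6\": %d, waste: %.1f\"%n",
                    best[0], best[1], best[2], bestWaste);
        }
    }

With only three fixed lengths the search space is tiny, so brute force is fine; the linear-programming machinery only becomes worthwhile when you also cut pieces down or optimize across many runs at once.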

How to tell the different SOMs apart?

I have been handed many GC dev boards and SOMs at work, but there seems to be no external way to tell them apart by looking at part/model numbers. My coworkers are disorganized and have mixed three different orders together, and we cannot tell which is which anymore. I could turn them all on and look in the terminal, but surely there must be an easier way. I don't know why there is no label or other discerning factor on the packaging or the SOM itself.
The datasheet lists only some numbers, and these don't correlate to any numbers on the actual SOM board. I have a model number that's the same for all of them, "AA1", but at the top is "09JF001TK", which contains the manufacturing date; I cannot decipher what the other letters/numbers mean. I sent in a support request but have not heard back; I hope you can help. Neither QR code seems to yield any results.

Mahout recommendations with metadata related to preference

I was planning to write a recommender which treats preferences differently depending on contextual information (the time the preference was made, the device used to make it, ...).
Within the Mahout in Action book and the code examples shipped with Mahout, I can't seem to find anything related. In some examples there's metadata (a.k.a. content) used to express user or item similarity, but that's not what I'm looking for.
I wonder if anyone has already made an attempt to do something similar with Mahout?
Edit:
A practical example could be that the current session is done on a mobile device and this should cause a push up (rating*1.1) for all preferences tracked on mobile devices and a drop for preferences tracked differently (rating*0.9).
...
Another example could be that some ratings are collected implicitly and others explicitly. How would I be able to keep track of this fact without "coding" it directly into the tracked value, and how would I be able to use that information when calculating the scores?
I would say one approach is to use the Rescorer class to do just that, but my guess is that this is what you are referring to when you say that's not what you are looking for.
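For what it's worth, a minimal sketch of that first approach, assuming Mahout's Taste IDRescorer interface; the set of item IDs rated on mobile is hypothetical and would have to be maintained by your own tracking code, since Mahout has no notion of it:

    import java.util.Set;
    import org.apache.mahout.cf.taste.recommender.IDRescorer;

    // Boost items whose preferences were tracked on mobile, penalise the rest,
    // per the rating*1.1 / rating*0.9 example in the question.
    public class MobileBoostRescorer implements IDRescorer {
        private final Set<Long> mobileItemIDs; // hypothetical: IDs rated on mobile

        public MobileBoostRescorer(Set<Long> mobileItemIDs) {
            this.mobileItemIDs = mobileItemIDs;
        }

        @Override
        public double rescore(long id, double originalScore) {
            return mobileItemIDs.contains(id) ? originalScore * 1.1 : originalScore * 0.9;
        }

        @Override
        public boolean isFiltered(long id) {
            return false; // never exclude an item outright
        }
    }

You would pass it in at query time, e.g. recommender.recommend(userID, 10, new MobileBoostRescorer(mobileItemIDs)). Note this rescores per item, not per preference, which may be exactly the limitation you were referring to.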
Another approach would be to pre-process the entire data you have to adjust the preferences according to your needs, before using Mahout to generate recommendations.
If you provide some more detail on how you expect to use your data to modify preferences, people here would be able to help even further.

Settings.bundle, input validation

Is it possible to specify validation rules for a particular entry in a Settings.bundle? For instance, would it be possible to restrict a text field to a particular set of characters? Or do I have to alert the user if there is "garbage" in the configuration?
I believe you'll have to alert the user that they entered bad data. There isn't any way to add your own code to the settings bundle beyond the plist values.
Boy, this is an old question, and...boy, things haven't improved.
The settings metaphor (where Apple recommends that you stick ALL your app settings, BTW) is a usability train wreck.
For example, I needed to add a discrete set of integers as a range of values (1 - 7); the default is an Int. It looks like the only UI element available is a slider, with a continuous range and no markings.
Oh, well. There's no way in hell that I'd ever use this as a place to put general-purpose app settings; only rather obscure ones that hardly ever need to be tweaked.

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether realtime audio input matches one of 144 given (and comfortably distinct) phoneme-pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App > Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package; can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training be necessary by the user? I would have thought not, as it is such an elementary task, compared with full language models of thousands of words and far greater and more subtle phoneme base. However, it would be acceptable (not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct note pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for the recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both YouTube video demo and github source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.
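For reference, as far as I can tell a template matcher of that kind boils down to dynamic time warping over per-frame feature vectors such as MFCCs. Here is a minimal sketch of just that comparison step, in Java for illustration; feature extraction is assumed to happen elsewhere:

    public class Dtw {
        // DTW distance between two sequences of feature vectors (frames x coefficients),
        // length-normalised so templates of different lengths compare fairly.
        static double distance(double[][] a, double[][] b) {
            int n = a.length, m = b.length;
            double[][] d = new double[n + 1][m + 1];
            for (double[] row : d) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
            d[0][0] = 0;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= m; j++) {
                    double cost = euclidean(a[i - 1], b[j - 1]);
                    d[i][j] = cost + Math.min(d[i - 1][j - 1], Math.min(d[i - 1][j], d[i][j - 1]));
                }
            return d[n][m] / (n + m);
        }

        static double euclidean(double[] x, double[] y) {
            double s = 0;
            for (int k = 0; k < x.length; k++) s += (x[k] - y[k]) * (x[k] - y[k]);
            return Math.sqrt(s);
        }

        public static void main(String[] args) {
            double[][] live = {{0.1, 0.2}, {0.3, 0.1}, {0.2, 0.2}}; // toy frames
            double[][] template = {{0.1, 0.2}, {0.2, 0.2}};
            System.out.println("DTW distance: " + distance(live, template));
        }
    }

You would compute this distance from the live input against each stored phoneme-pair template and take the closest match, with a threshold to reject poor matches.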
