Why do languages need libraries? [closed] - libraries

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Can't the languages just include the functions in them?
For example to use the sqrt function in Python you need to import the math library.
Why can't languages already have these functions built in?

Names are a scarce resource.
Would you want to be required to avoid using thousands of names, including things like max, set, read, and cycle?
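As a concrete illustration (a hypothetical sketch, using only Python's standard math module), even a single wildcard import already shows the collision problem:

    # Sketch: why names are scarce. If every library name were built into the
    # language, ordinary names would collide.
    print(pow(2, 10))    # built-in pow -> 1024 (an int)

    from math import *   # the wildcard import brings in math.pow, shadowing the built-in
    print(pow(2, 10))    # now 1024.0 (a float): same name, different function

    # With thousands of functions built in (max, set, open, sum, ...),
    # collisions like this would be unavoidable.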

As I understand it, you have two quite different questions, and a very precise answer is not possible for either one.
Can't the languages just include the functions in them?
I'm not sure whether by this you mean the explicit import of a function in a source file that the programmer has to write, or whether it is just a duplicate of question #2, which I have also tried to answer.
Reasons for explicit imports: to allow multiple implementations of the same logic, and to reduce the size of the application executable. For example, suppose a language implemented sqrt in a way that is slow, and some other smart programmer wrote the same function more efficiently. Wouldn't you rather use the second option instead of the function the language provides? That is only possible if the programmer can specify which sqrt he or she means to use.
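A hedged sketch of what that choice looks like in Python ("fastmath" is a hypothetical package name, and fast_sqrt below is just a stand-in implementation, not a real library):

    # The language ships one sqrt...
    from math import sqrt

    # ...but an explicit import would let you pick a different implementation, e.g.
    # from fastmath import sqrt as fast_sqrt   # hypothetical third-party package

    def fast_sqrt(x, tolerance=1e-12):
        """Stand-in alternative implementation (Newton's method)."""
        guess = x if x > 1 else 1.0
        while abs(guess * guess - x) > tolerance:
            guess = (guess + x / guess) / 2
        return guess

    print(sqrt(2))       # the implementation the language provides
    print(fast_sqrt(2))  # the implementation the programmer explicitly chose

The point is only that an explicit name lets the programmer say which implementation is meant.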
Why can't languages already have these functions built in?
Because every piece of software needs to be maintained and continuously upgraded (as computing trends change) by a group of people, and everybody is constrained by resources, especially in an open-source environment. So we try to keep the core language software minimal, so that it can easily be maintained and improved by a core group X, while groups Y and Z take care of non-essential or optional items. That way, the scope of the language stays limited. You should also know that languages already contain lots of features which are rarely used.
A proprietary and wealthy company like Microsoft might have a different thought process: they might put 1,000 dedicated people on their language and try to include everything. But most popular languages originated, and still live, in a non-corporate environment.
Another reason is giving flexibility to the programmer, as already explained. A language that provides everything and requires you to use only those functions would be very inflexible.
If you add in the complexity of business domains (something specific to aerospace, something specific to healthcare, and so on), the scope becomes effectively unlimited very quickly.
Usually, software is divided into two parts, a core part and optional patches (modules), to achieve better maintainability and flexibility, and to reduce software size on an as-needed basis.

Related

Concept Based Text Summarization (Abstraction) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I am looking for an engine that does AI text summarization based on the concept or meaning of a sentence. I looked at open-source projects (ginger, paraphrase, ace), but they don't do the job.
The way they work is that they try to find synonyms for each word and replace the current words with them. This generates a lot of alternatives to a sentence, but the meaning is wrong most of the time.
I have worked with Stanford's engine to produce something like highlights for an article and, based on that, extract the most important sentences, but that is still not abstraction, it's extraction.
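For reference, the kind of extraction I mean can be approximated by a simple word-frequency scorer (a rough sketch, not the Stanford engine):

    # Minimal extractive summarizer: score sentences by word frequency and
    # keep the top-scoring ones. This is "extraction", not "abstraction".
    import re
    from collections import Counter

    def extractive_summary(text, n_sentences=2):
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        freq = Counter(re.findall(r'\w+', text.lower()))
        scored = sorted(
            sentences,
            key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
            reverse=True,
        )
        top = set(scored[:n_sentences])
        # Keep the chosen sentences in their original order.
        return ' '.join(s for s in sentences if s in top)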
It would also make sense for the engine I'm looking for to learn over time, so that results improve after each summary.
Please help out here, your help is greatly appreciated!
I don't know of any open-source project which fits your requirements regarding abstraction and meaning as I understand them.
But I have some ideas about how to build such an engine and how to train it.
In a few words, I think we all keep some Bayesian-network-like structure in our minds, which helps us not only to classify data but also to form an abstract meaning of a text or message.
Since it is impossible to extract all of that abstract category structure from our minds, I think it's better to build a mechanism which allows us to reconstruct it step by step.
Abstract
The key idea of the proposed solution is to extract the meaning of a conversation using approaches that an automated computer system can operate with more easily. This would allow creating a good illusion of a real conversation with another person.
The proposed model supports two levels of abstraction:
The first, less complex level consists in recognizing a group of words, or a single word acting as a group, as relating to a category, an instance, or an instance attribute.
An instance means an instantiation of a general category as a real or abstract subject, object, action, attribute, or other kind of entity. For example, a concrete relation between two or more subjects: the concrete relation between an employer and an employee, a concrete city and the country in which it is situated, and so on.
This basic meaning-recognition approach allows us to create a bot with the ability to sustain a conversation. The ability is based on recognizing the basic elements of meaning: categories, instances, and instance attributes.
The second, more complicated method is based on recognizing scenarios and storing them in the conversation context together with instances/categories, as well as using them to complete some of the recognized scenarios.
Related scenarios will be used to complete the next message of the conversation; some scenarios can also be used to generate the next message, or to recognize a meaning element by applying conditions and meaning elements from the context.
Something like this:
The basic classification should be entered manually, with future corrections/additions by the teachers.
Words from a sentence in the conversation, and scenarios derived from a sentence, can be filled in from the context.
Conversation scenarios/categories can be filled in with previously recognized instances, or with instances described later in the conversation (self-learning).
Pic 1 – word detection/categorization, basic flow
Pic 2 – general system vision, big-picture view
Pic 3 – meaning element classification
Pic 4 – a possible basic category structure
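A rough sketch of the first (word-to-category) level might look like the following; the tiny hand-entered dictionary and the helper name are purely illustrative stand-ins for the manually entered basic classification:

    # Sketch of the first abstraction level: map single words to categories,
    # instances and instance attributes. The dictionary stands in for the
    # manually entered basic classification mentioned above.
    MEANING_ELEMENTS = {
        "london":  {"kind": "instance",  "category": "city"},
        "england": {"kind": "instance",  "category": "country"},
        "city":    {"kind": "category"},
        "capital": {"kind": "attribute", "of": "city"},
    }

    def recognize_meaning_elements(sentence):
        """Return the categories/instances/attributes recognized in a sentence."""
        found = []
        for word in sentence.lower().split():
            token = word.strip(".,!?")
            element = MEANING_ELEMENTS.get(token)
            if element:
                found.append((token, element))
        return found

    # "London is the capital of England." -> a city instance, an attribute,
    # and a country instance, which the scenario level could then relate.
    print(recognize_meaning_elements("London is the capital of England."))

The second (scenario) level would then operate on these recognized elements rather than on raw words.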

Should units of measurement be localized? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 11 days ago.
I am working on an app featuring some measurements entered by human operators. In the config section, an admin enters which measurements and which units they want to use, among other things. The kinds of units anticipated are very diverse and not able to be fully defined. So the plan is to let the admin enter the units in free-form instead of using a select box.
OK so far. But elsewhere, we are displaying the units when the app is localized into one of several different languages. The possible range of languages is known by the app from the beginning.
I'm looking for ideas on how best to handle the entering and displaying of units. I'm by no means a linguistic expert, but I imagine different languages have their own ways of representing the same units, which would imply that if we use free-form text entry, the admin would have to enter the unit's translation into each language. We do that with other kinds of text fields in the app, so it's not a huge problem from a coding standpoint.
But I'm wondering how others handle this kind of situation. It'd be a lot easier NOT to translate the units. But is that reasonable? FWIW, both the admins and the end-users of this system are typical consumers, not necessarily scientists or other analytic types. Also, we need to avoid having our software dependent on 3rd-party services like Google Translate.
The units themselves are partly culturally dependent, like metric vs. imperial vs. US units. If you intend to allow any units at all, you would in a sense carry out the ultimate localization of units: the individual user would decide on the units. I suppose you would still recognize a finite set of units, somehow.
The symbols for units are a different issue. If SI units are used, their symbols are in principle the same across languages and cultures. But there are differences in practice; e.g., in Russia, it is normal to use Russian abbreviations (in Cyrillic letters) instead of standard symbols, e.g. кг and not kg. Moreover, if users can enter units by name, the names need to be localized (even though they tend to be similar, like meter ~ metre ~ Meter ~ metri ~ метр, they are not identical). And many non-SI units don’t even have standardized symbols.
So the set of unit symbols or names recognized would need to be language-dependent.
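One way to handle that without any translation service is to store measurements against a canonical unit code and localize only the displayed symbol (a sketch; the codes and the small table below are made up for illustration):

    # Sketch: canonical unit codes with per-locale display symbols.
    UNIT_SYMBOLS = {
        "kilogram": {"en": "kg", "ru": "кг", "fr": "kg"},
        "byte":     {"en": "B",  "ru": "Б",  "fr": "o"},   # French "octet" -> "o"
        "metre":    {"en": "m",  "ru": "м",  "fr": "m"},
    }

    def format_measurement(value, unit_code, locale):
        symbols = UNIT_SYMBOLS.get(unit_code, {})
        symbol = symbols.get(locale, symbols.get("en", unit_code))
        return f"{value} {symbol}"

    print(format_measurement(2.5, "kilogram", "ru"))   # 2.5 кг
    print(format_measurement(512, "byte", "fr"))       # 512 o

Free-form units entered by the admin would then only need a symbol per supported language, much like the other translated text fields in the app.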
No, they are international when using SI units.
It comes to mind that French-speaking countries use o for octet instead of B for byte. But that is a unit, not an SI prefix.
I have also observed that countries using Cyrillic script replace k, M, G with their respective glyphs. I don't know whether this is just a convenience that happens to be accepted, or whether it is the official way; I believe it is a convenience, and that the Latin letters defined by the SI standard are the ones actually prescribed for SI prefixes. Therefore the Latin letters would be correct (at least, too). (IMHO)

Machine learning - Helmholtz Machine implementation [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
I am looking for an implementation of the Helmholtz machine.
References:
http://www.gatsby.ucl.ac.uk/~dayan/papers/hm95.pdf
http://www.cs.toronto.edu/~hinton/absps/helmholtz.pdf
I am looking for open-source or free implementations. I prefer Java implementations, but implementations in other languages (C, C++, C#, or Python, mainly) will also help me.
In my search on the web I have found only abstract descriptions of this approach, without any concrete implementation. My hope is to find an expert on the subject who has more information about it.
Deeplearning4j is an open-source implementation of various deep-learning machines that Hinton might classify as "Helmholtz." http://deeplearning4j.org/
I have had a quick look at this on the link you gave.
I have been "working on my own with a small team on AI sentience" ..... since 1968 !!!
My thoughts are as follows:
All events happen in a "time series".
There is a past time series that has a probability "high" as far as the sentient observer is concerned.
There is a future "predicted" time series, predicted ahead using the "best" (time-series) model the sentient observer can create; as the time series disappears into the future, the probability of that time series "becoming the past time series" diminishes towards zero, and that could occur in milliseconds or in billions of years, depending on the model dynamics.
I do not think there ever is "a present time".
Unfortunately, after studying Kalman filters and predictors and using them in missile targeting, I have concluded that the whole "topic" of "mathematically representing" the best algorithms (i.e. models) that humans could come up with was a waste of time, since even the simplest "program" performs a task that cannot be represented by conventional mathematical symbols... and so I have concluded that "computer algorithms" ARE mathematical formulas, i.e. formulas that normal symbolic mathematics does not have the tools to describe (in other words, programs are superior to complex mathematical systems of notation).
Mathematics is fine for "proofs" and "big statistical ideas", but (and I am getting near the end now) I would "trust" your own instincts to create a "model" that predicts the future best. It might have to include the concept of "on alternate Wednesdays in the US", and thousands of other such non-mathematical "states" or various "axioms"... which is fine!
So how, you ask, could this be mathematically correct?
Well, the answer is quite simple really: the best model is the best model at predicting the future!
And the future keeps popping up surprisingly often, so it's easy to test, and to keep testing!
All you need, to know that you have the best "mathematics" (i.e. program), is to see how much "noise" or "deviation from prediction" there is between the prediction and the actual outcome in the time series.
"State space" is the best "maths" to use for this: assume that there is an "underlying state", and then assume that your "observations" are just flawed, "noisy or just wrong" observations of that underlying state, i.e. the system's output signals are "somehow" based on these "invisible" internal system states.
There is an AI sentience "computer language" called MTR that we created (mainly in the 1980s) which is designed for this sort of dynamic model creation. The downside for us (humans) is that it is designed for AI entities to use, not humans, although we are going to put a "Pascal-like" front end onto it soon to allow normal humans to use it. IBM, Intel, GCHQ, the MOD, the DOD etc. all had licences, but we then shelved it!
We intend to re-start the project soon.
Anyway, that's what I think. I hope it is not too abstract for your purposes!
We could say (and here I am joking) that programmers who try to use "pure mathematics" to write programs "have the horns by the bull"?
So hopefully programmers can be much more relaxed when they do not understand the entirety of all the maths!!
I hope that thought might also help any "non-maths" readers of this response.

The Possibility of a "True" Quiz Generator [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I've been given an assignment to create a program that might involve a quiz generator. I decided to come to you guys since you seem to be the most helpful.
Is an automatic quiz generator possible?
Is it that automatic, or do you have to enter your own questions and correct answers?
Can it work for other things rather than boolean answers (true and false)?
Can it observe text syntax so that it can create questions based on a paragraph of information?
Can it observe text syntax so that it can accept answers that are close to the right answer, but are off by a few words?
It would be very helpful if you could help me out, as this question has me stumped right now.
You guys always come through though, so I await your answers :D!
P.S. - I've seen other questions like this, but they only covered things like randomization. I believe that would be possible, but I'm wondering whether "true" generators are possible.
Is an automatic quiz generator possible?
It depends on what you call automatic, and what you consider a successful level of functionality. Something is definitely possible.
Is it that automatic, or do you have to enter your own questions and correct answers? Can it observe text syntax so that it can create questions based on a paragraph of information?
Yes, that's possible, but again there's a spectrum from only working for the simplest text and being easily confused (which is relatively easy to program - even a regular expression parser could do that), through to handling arbitrary real-world textual sources and getting say 80%+ of the facts out of the text and posing sensible questions for which it correctly identified the answer (which might take a team of 100 language and programming experts decades). Language analysis is difficult. If you want proof - try converting a paragraph of English text to another language using Babelfish or similar online translator, then convert it back... :-).
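To make the crude end of that spectrum concrete (a sketch only, easily confused by anything beyond the simplest text), even a regular expression can turn "X is Y" statements into questions:

    # Sketch: the "simplest text, easily confused" end of the spectrum.
    import re

    def naive_questions(text):
        pairs = re.findall(r'([A-Z][\w ]+?) is ([\w ]+?)[.!?]', text)
        return [(f"What is {subject.strip()}?", answer.strip())
                for subject, answer in pairs]

    text = "Paris is the capital of France. Water is a liquid at room temperature."
    for question, answer in naive_questions(text):
        print(question, "->", answer)

Anything with pronouns, subordinate clauses or negation will immediately defeat it, which is exactly the point about the difficulty of real language analysis.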
Can it work for other things rather than boolean answers (true and false)?
Of course, but again the more complex you make it, the less likely you'll get anything that works...
Can it observe text syntax so that it can accept answers that are close to the right answer, but are off by a few words?
It could, but the range of ways someone might phrase an answer is so varied that having to follow a simple template with a few words' tolerance wouldn't work well in general use.
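A similarity threshold gives a rough approximation of "close but off by a few words" (a sketch using Python's standard difflib; the 0.8 cut-off is an arbitrary choice):

    # Sketch: accept an answer if it is "close enough" to the expected one.
    from difflib import SequenceMatcher

    def is_close_enough(given, expected, threshold=0.8):
        ratio = SequenceMatcher(None, given.lower().strip(),
                                expected.lower().strip()).ratio()
        return ratio >= threshold

    print(is_close_enough("the capital of france is paris",
                          "Paris is the capital of France"))   # False: word order hurts the ratio
    print(is_close_enough("Paris is the capitol of France",
                          "Paris is the capital of France"))   # True: a small typo still passes

As noted above, this quickly breaks down once people phrase the same idea in genuinely different words.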
General thoughts
Why don't you search for existing educational quiz programs to get an idea of what other people have achieved...?
I would make an automatic math quiz generator, as a simple example.
Questions could be generated easily: just come up with two random numbers that fit certain characteristics, randomly choose to add/subtract/multiply them, and then compute the correct answer.
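A sketch of that idea (illustrative only):

    # Sketch: generate an arithmetic question from two random numbers and a
    # random operator, and compute the correct answer alongside it.
    import random
    import operator

    OPERATORS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def make_question():
        a, b = random.randint(1, 20), random.randint(1, 20)
        symbol, fn = random.choice(list(OPERATORS.items()))
        return f"What is {a} {symbol} {b}?", fn(a, b)

    question, answer = make_question()
    print(question)           # e.g. "What is 7 * 12?"
    print("Answer:", answer)  # e.g. 84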
But for non-math subjects, a quiz generator would be more difficult; it would need some kind of database of sample questions to draw from.
Is an automatic quiz generator possible?
Yes, an automatic quiz generator is possible.
Is it that automatic, or do you have to enter your own questions and correct answers?
You could make it automated, but that would require access to a large database and very complex data mining algorithms. If it's an assignment, you would probably be better off having it take in questions and their corresponding answers. A mathematics quiz generator would be much easier to implement, as it would only require random operators and operands placed in the correct sequence.
Can it work for other things rather than boolean answers (true and false)?
This depends entirely on your implementation, but theoretically yes.
Can it observe text syntax so that it can create questions based on a paragraph of information?
If you have an awesome data mining script and resources to form grammatically-correct sentences with raw information, then yes.
Can it observe text syntax so that it can accept answers that are close to the right answer, but are off by a few words?
Producing an algorithm to reliably evaluate different sentences with the same meanings as the same would be very difficult. You would need to account for spelling and grammatical errors as well as synonyms and many other factors. Furthermore, it would be very language (not programming language) dependent.
I hope this answered some of your questions.

RUP (Rational Unified Process) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have chosen to use the development method RUP (Rational Unified Process) in my project. This is a method I've never used before. I've also included some elements from Scrum in the development process. The question is what the requirement specification should contain in a RUP model. Is it functional and non-functional requirements? And what should be included in a technical analysis and in security requirements for RUP? I can't find any information. Notes about this would be helpful.
I hope people with RUP experience can share some useful experiences.
RUP has 3 main parts:
Roles
Activities
Work Products
Each ROLE performs an ACTIVITY and, as a result, produces a WORK PRODUCT...
For example, an Analyst [Role] Develops the Vision [Activity], and as a result we get the Vision [Work Product]...
Besides this, RUP gives us GUIDELINES and CHECKLISTS for doing our ACTIVITIES and WORK PRODUCTS right...
RUP gives us templates for WORK PRODUCTS, but they are only there to give an idea of what they might look like...
For the Vision, for instance, you can use the RUP template, but you could just as well use a post-it note and write an "elevator statement" like this:
For [target customer], who [statement of the need or opportunity], the (product name) is a [product category] that [statement of key benefit; that is, the compelling reason to buy]. Unlike [primary competitive alternative], our product [statement of primary differentiation].
Work Products can even be simple statements that you write on your wiki... They can take any form...
They do not have to be "statically written" docs... They can even be a video.
For example, instead of writing a Software Architecture document [the Architecture Notebook in OpenUP], you could just record a video in which your team explains the main architecture on a whiteboard...
WARNING ABOUT RUP WORK PRODUCT TEMPLATES:
DO NOT BECOME A TEMPLATE ZOMBIE. YOU SHOULD NOT FILL IN EVERY PART OF A TEMPLATE...
ASK YOURSELF WHAT BENEFIT YOU WILL GET FROM WRITING THIS... IF YOU HAVE NO VALID ANSWER, DO NOT WRITE IT...
DOCUMENTATION SHOULD HAVE REAL REASONS; DO NOT PRODUCE DOCUMENTATION JUST FOR THE SAKE OF "DOCUMENTATION"...
RUP has a rich set of WORK PRODUCTS... so choose the minimal subset from which you will get the most benefit...
For a typical project you will generally have these Requirements Work Products:
Vision: What are we doing and why are we doing it? The agreement of the stakeholders...
Supplementary Specification [System-Wide Requirements in OpenUP]: generally captures the non-functional [a term I do not like] or "quality" [a term I do like] requirements of the system.
Use-Case Model: captures the functional requirements as use cases.
Glossary: to make concepts clear...
RUP is commercial but OpenUP is not... So you can look at the OpenUP WORK PRODUCT templates just to get an idea of what kind of information is recorded in them...
Download it from the Eclipse Process Framework Project (http://www.eclipse.org/epf/downloads/configurations/pubconfig_downloads.php) and start reading from the index page.
Lastly, you can find those WORK PRODUCTS used in an agile manner in Larman's book Applying UML and Patterns...
And again: DO NOT BECOME A TEMPLATE ZOMBIE!!!
Try the Rational Unified Process page at Wikipedia for an overview.
The core requirements should be documented in the project description. RUP tends to place a lot of emphasis on "use cases"; however, it is very important not to lose sight of the original requirements at all levels of detail, because these answer the "Why?" questions. If the developers only see the use cases, they will know what they are supposed to build (effectively the functional requirements) but not why it is required. Unless the developers have easy access to the original analysts, this can cause very serious problems.
