RUP (Rational Unified Process) [closed] - methodology

I have chosen to use the Rational Unified Process (RUP) as the development method for my project. This is a method I've never used before, and I've also included some elements from Scrum in the development process. My question is: what should the requirements specification contain in a RUP model? Is it functional and non-functional requirements? And what should be included in a technical analysis and in the security requirements for RUP? I can't find any information, so notes about this would be helpful.
I hope people with RUP experience can share some useful insights.

RUP has 3 main parts:
Roles
Activities
Work Products
Each ROLE performs an ACTIVITY and, as a result, produces a WORK PRODUCT...
For example, the Analyst [Role] performs Develop Vision [Activity], and as a result we have a Vision [Work Product]...
Besides this, RUP gives us GUIDELINES and CHECKLISTS to help us do our ACTIVITIES and WORK PRODUCTS right...
RUP gives us templates for the WORK PRODUCTS, but they are just there to give an idea of what they may look like...
For the Vision, for example, you could use the RUP template, but you could just as well use a post-it note and write an "elevator statement" like this:
For [target customer]
Who [statement of the need or opportunity]
The [product name] is a [product category]
That [statement of key benefit; that is, the compelling reason to buy]
Unlike [primary competitive alternative]
Our product [statement of primary differentiation]
Work products can even be simple statements that you write on your WIKI... They can be in any form...
They do not have to be "static written" docs... They can even be a "video".
For example, instead of writing a Software Architecture document [the Architecture Notebook in OpenUP], you could just record a video in which your team explains the main architecture on a whiteboard...
WARNING FOR RUP WORK PRODUCT TEMPLATES:
DO NOT BECOME A TEMPLATE ZOMBIE. YOU DO NOT HAVE TO FILL IN EVERY PART OF A TEMPLATE...
ASK YOURSELF WHAT BENEFIT YOU WILL GET BY WRITING THIS... IF YOU HAVE NO VALID ANSWER, DO NOT WRITE IT...
DOCUMENTATION SHOULD HAVE REAL REASONS; DO NOT PRODUCE DOCUMENTATION JUST FOR THE SAKE OF "DOCUMENTATION"...
RUP has a rich set of WORK PRODUCTS... So choose the minimum set from which you will get the most benefit...
For a typical project you will generally have these Requirements work products:
Vision: What are we doing and why are we doing it? The agreement of the stakeholders...
Supplementary Specification [System-Wide Requirements in OpenUP]: Generally captures the non-functional [a term I do not like] or "quality" [a term I like] requirements of the system.
Use-Case Model: Captures functional requirements as use cases.
Glossary: To make concepts clear...
RUP is commercial but OpenUP is not... So you can look at the OpenUP WORK PRODUCT templates just to get an idea of what kind of information is recorded in them...
Download it from the Eclipse Process Framework Project, http://www.eclipse.org/epf/downloads/configurations/pubconfig_downloads.php, and start reading from the index page.
Lastly, you can see how those WORK PRODUCTS are used in an agile manner in Larman's book Applying UML and Patterns...
And again: DO NOT BECOME A TEMPLATE ZOMBIE!!!

Try the Rational Unified Process page at Wikipedia for an overview.
The core requirements should be documented in the project description. RUP tends to place a lot of emphasis on "use cases"; however, it is very important not to lose sight of the original requirements at all levels of detail, because these will answer the "Why?" questions. If the developers only see the use cases, they will know What they are supposed to build (effectively the functional requirements) but not Why it is required. Unless the developers have easy access to the original analysts, this can cause very serious problems.

Related

Why do languages need libraries? [closed]

Can't the languages just include the functions in them?
For example to use the sqrt function in Python you need to import the math library.
Why can't languages already have these functions built in?
Names are a scarce resource.
Would you want to be required to avoid using thousands of names, including things like max, set, read, and cycle?
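As a hedged illustration of that point about names (not from the original answer, and purely hypothetical), here is how Python keeps library names out of your way until you ask for them:

    # Because most functionality lives in modules, the name 'cycle' is free for
    # your own, domain-specific meaning unless you explicitly import itertools.cycle.
    def cycle(riders):
        """Hypothetical domain meaning: send each rider around the track once."""
        return [f"{rider} completed a lap" for rider in riders]

    print(cycle(["Ana", "Ben"]))

    # When you do want the library version, you import it under an explicit name:
    from itertools import cycle as repeat_forever

    colours = repeat_forever(["red", "green", "blue"])
    print(next(colours), next(colours))

If every standard function were built into the global namespace, ordinary words like these would constantly collide with your own code.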
As I understand it, you are asking two very different questions, and a very precise answer is not possible for either one.
Can't the languages just include the functions in them?
I am not sure whether by this you mean the explicit import of a function that the programmer needs to write in the source file, or whether it is just a duplicate of question #2, which I have already tried to answer.
Reasons for explicit import: to keep open the option of multiple implementations of the same logic, and to reduce the size of the application executable. For example, suppose a language implemented a function such as sqrt in a way that is slow, and some other smart programmer wrote the same method in a more efficient way; wouldn't you like to use the second option rather than the language-provided function? That can only be achieved if the programmer specifies which sqrt he or she means to use.
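A minimal sketch of that idea, assuming nothing beyond Python's standard math module (the alternative implementation below is invented for illustration):

    import math

    def fast_approx_sqrt(x, iterations=5):
        """A hypothetical alternative sqrt using a few Newton-Raphson steps."""
        guess = x / 2.0 if x > 1 else 1.0
        for _ in range(iterations):
            guess = 0.5 * (guess + x / guess)
        return guess

    # The caller chooses which implementation to use; the language does not decide.
    print(math.sqrt(2.0))         # the standard library's sqrt
    print(fast_approx_sqrt(2.0))  # the "smart programmer's" alternative

The explicit import (or call) is what lets you pick one sqrt over the other.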
Why can't languages already have these functions built in?
Because every piece of software needs to be maintained and continuously upgraded (as computing trends change) by a set of people, and everybody is constrained by resources, especially in an open-source environment. So we try to keep the core language software minimal so that it can easily be maintained and improved by a core group X, while group Y and group Z take care of the non-essential / optional items. That way, the scope of the language stays limited. You should also know that languages already contain lots of features that are rarely used.
A proprietary and rich company like Microsoft might have a different thought process and might put 1000 dedicated people on its language and try to include everything, but most popular languages originated, and still live, in a non-corporate environment.
Another reason is giving flexibility to the programmer, as already explained. A language that provides you with everything and asks you to use only those functions would be very inflexible.
If you add in the complexity of business domains, something specific for aerospace, something specific for healthcare, and so on, the scope very easily becomes unlimited.
Usually, software is divided into two parts, a core part and optional modules, to achieve better maintainability and flexibility and to reduce the software's size on an as-needed basis.

Concept Based Text Summarization (Abstraction) [closed]

I am looking for an engine that does AI text summarization based on the concept or meaning of a sentence. I looked at open-source projects (ginger, paraphrase, ace) but they don't do the job.
The way they work is that they try to find synonyms for each word and substitute them for the current words; this way they generate a lot of alternatives to a sentence, but the meaning is wrong most of the time.
I have worked with Stanford's engine to do something like highlighting an article and, based on that, extracting the most important sentences, but this is still not abstraction, it's extraction.
It would also make sense that the engine I'm looking for learns over time and results are improved after each summary.
Please help out here, your help is greatly appreciated!
I don't know of any open-source project that fits your requirements for abstraction and meaning as I understand them.
But I have an idea of how to build such an engine and how to train it.
In a few words, I think we all keep some Bayesian-network-like structure in our minds, which helps us not only to classify data, but also to form an abstract meaning of a text or message.
Since it is impossible to extract that whole structure of abstract categories from our minds, I think it is better to build a mechanism that allows us to reconstruct it step by step.
Abstract
The key idea of the proposed solution is to extract the meaning of a conversation using approaches that are easier for an automated computer system to work with. This will allow creating a good illusion of a real conversation with another person.
The proposed model supports two levels of abstraction:
The first, less complex level consists of recognizing a group of words, or a single word, as something related to a category, an instance, or an instance attribute.
An instance means an instantiation of a general category of a real or abstract subject, object, action, attribute, or other kind of instance. As an example, a concrete relation between two or more subjects: the concrete relation between an employer and an employee, a concrete city and the country it is situated in, and so on.
This basic meaning-recognition approach allows us to create a bot with the ability to sustain a conversation. This ability is based on recognizing the basic elements of meaning: categories, instances, and instance attributes.
The second, more complicated method is based on recognizing scenarios, storing them in the conversation context together with instances/categories, and using them to complete some of the recognized scenarios.
Related scenarios will be used to complete the next message of the conversation; some scenarios can also be used to generate the next message, or to recognize a meaning element, by using conditions and meaning elements from the context.
Something like this:
The basic classification should be entered manually, with future corrections/additions from the teachers.
Words in a conversation sentence, and scenarios from a sentence, can be filled in from the context.
Conversation scenarios/categories can be filled with previously recognized instances or with instances described later in the conversation (self-learning).
Pic 1 – word detection/categorization: basic flow
Pic 2 – general system vision: big-picture view
Pic 3 – meaning element classification
Pic 4 – a possible structure for the basic categories
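To make the first abstraction level a little more concrete, here is a rough, purely illustrative sketch (not taken from the pictures above; all names and data are invented) of mapping words to categories, instances, and attributes:

    # Level-one recognition: look up words or word groups as meaning elements.
    MEANING_ELEMENTS = {
        "paris":   {"kind": "instance",  "category": "city"},
        "france":  {"kind": "instance",  "category": "country"},
        "city":    {"kind": "category"},
        "capital": {"kind": "attribute", "of": "city"},
    }

    def recognize(sentence):
        """Return the meaning elements found in a sentence."""
        found = []
        for word in sentence.lower().replace(",", "").split():
            element = MEANING_ELEMENTS.get(word)
            if element:
                found.append((word, element))
        return found

    # Instances (paris, france), a category (city) and an attribute (capital):
    print(recognize("Paris is the capital city of France"))

The second level, scenario recognition and context, would sit on top of a structure like this.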

Machine learning - Helmholtz Machine implementation [closed]

I am looking for an implementation of the Helmholtz Machine.
References:
http://www.gatsby.ucl.ac.uk/~dayan/papers/hm95.pdf
http://www.cs.toronto.edu/~hinton/absps/helmholtz.pdf
I am looking for open-source or free implementations. I prefer Java implementations, but implementations in other languages (mainly C, C++, C#, or Python) would also help me.
In my search on the web I have found only abstract descriptions of this approach, without any concrete implementation. My hope is to find an expert on the subject who has more information about it.
Deeplearning4j is an open-source implementation of various deep-learning machines that Hinton might classify as "Helmholtz." http://deeplearning4j.org/
I have had a quick look at this on the link you gave.
I have been "working on my own with a small team on AI sentience" ..... since 1968 !!!
My thoughts are as follows:
All events happen in a "time series".
There is a past time series that has a probability "high" as far as the sentient observer is concerned.
There is a future "predicted" time series, predicted ahead using the "best" (time series) model the sentient observer can create; as the time series extends into the future, the probability of that time series "becoming the past time series" diminishes towards zero, and that could occur in milliseconds or in billions of years, depending on the model dynamics.
I do not think there ever is "a present time".
Unfortunately, after studying Kalman filters and predictors and utilising them in missile targeting, I have concluded that the whole "topic" of "mathematically representing" the best algorithms (i.e. models) that humans could come up with was a waste of time, as even the simplest "program" is doing a task that could not be represented by mathematical symbols... and so I have concluded that "computer algorithms" ARE mathematical formulas, i.e. formulas that normal symbolic mathematics does not have the tools to describe (i.e. programs are superior to complex mathematical systems of notation).
Mathematics is fine for "proofs" and "big statistical ideas" but... (and I am getting near the end now)... I would "trust" your own instincts to create a "model" that predicts the future best... i.e. it might have to have the concept of "on alternate Wednesdays in the US" in it... and also thousands of other such non-mathematical "states" or various "axioms"... which is fine!
So how, you ask, could this be mathematically correct?
Well, the answer is quite simple really: the best model is the best model at predicting the future!
And the future keeps popping up surprisingly often - and so it's easy to test - and keep testing !
All you need, to know that you have the best "mathematics" (i.e. program), is to see how much "noise" or "deviation from prediction" exists between the prediction and the actual outcome in the time series.
"State-Space" is the best "maths" to use for this ... i.e. assume that there is an "underlying state" and then assume that your "observations" are just flawed "noisy or just wrong" observations of that underlying state - i.e. the system output signals are "somehow" based on these "invisible" internal system states.
There is an AI sentience "computer language" called MTR that we created (mainly in the 1980s) which is designed for this sort of dynamic model creation, but the downside for us (humans) is that it is designed for AI entities to use, not humans, although we are going to put a "Pascal-like" front end onto it soon to allow normal humans to use it. IBM, Intel, GCHQ, MOD, DOD etc. all had licences, but we then shelved it!
We intend to re-start the project soon.
Anyway, that's what I think. I hope it is not too abstract for your purposes!
We could say... (and in this I am joking)... that programmers who try to use "pure mathematics" to write programs "have the horns by the bull"?
So hopefully programmers can be much more relaxed when they do not understand the entirety of all the maths!!!
I hope that thought might also help any "non-maths" readers of this response.

Best Practice: Order of application design [closed]

I can think of quite a few components that need to be created when authoring a web application. I know it should probably be done incrementally, but I'd like to see what order you usually tackle these tasks in. Lay out your usual order of events and some justification.
A few possible components or sections I've thought of:
Stories (i.e. pivotaltracker.com)
Integration tests (Rspec, Cucumber, ...)
Functional tests
Unit Tests
Controllers
Views
Javascript functionality
...
The question is, do you do everything piecemeal (one story, one integration test, get it passing, move on to the next one, ...), or do you complete all of one component first and then move on to the next?
I'm a BDDer, so I tend to do outside-in. At a high level that means establishing the project vision first (you'd be amazed how few companies actually do this), identifying other stakeholders and their goals (legal, architecture, etc.) then breaking things down into feature sets, features and stories. A story is the smallest usable piece of code on which we can get feedback, and it may be associated with one or more scenarios. This is what Chris Matts calls "feature injection" - creating features because they are needed to support stakeholder goals and the project vision. I wrote an article about this a while back. I justify this because regardless of how good or well-tested your code is, it won't matter if it's the wrong code in the first place.
Once we have the story and scenarios, I tend to write the UI first, followed by the classes which support it. I wrote a blog post about a real-life example here - we were programming in Java so you might have to do things a bit differently with Rails, but the principles remain. I tend to start writing unit tests when there's actually behaviour to describe - that is, a class behaves differently depending on its context, on what has already happened before. Normally the first class will indeed be the controller, which I tend to populate with static data just to get the UI into shape. I'll write the first unit tests to help me get rid of that static data.
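As a rough, language-agnostic sketch of that step (in Python rather than the Java/Rails of the post, with every class and method name invented for illustration): the controller starts with static data to get the UI into shape, and the first unit test is what drives that static data out.

    import unittest

    class OrderController:
        def __init__(self, repository=None):
            # Initially this controller just returned a hard-coded list; injecting
            # a repository is what the first unit test forces us to do.
            self.repository = repository

        def recent_orders(self):
            if self.repository is None:
                return ["sample order 1", "sample order 2"]   # static placeholder data
            return self.repository.find_recent()

    class FakeRepository:
        def find_recent(self):
            return ["real order 42"]

    class RecentOrdersTest(unittest.TestCase):
        def test_orders_come_from_the_repository_not_static_data(self):
            controller = OrderController(repository=FakeRepository())
            self.assertEqual(["real order 42"], controller.recent_orders())

    if __name__ == "__main__":
        unittest.main()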
Doing the UI first lets me get feedback from stakeholders early, since it's the UI that the users will be interacting with. I then start with the "happy path" - the thing which lets the users do the most valuable thing - followed by the exceptional cases, validation, etc.
Then I do my best to persuade my PM to let us release our code early, because it's only when the users actually get hold of it to play with that you find out what you really did wrong.

Does anyone have a good analogy for dependency injection? [closed]

I have read a lot of articles on Dependency Injection as well as watched a lot of videos, but I still can't get my head around it. Does anyone have a good analogy to explain it?
I watched the first part of the Autumn of Agile screencast and still was a little confused.
Analogy? I'll give it a whack... Your CD Player stereo is useless without a CD with music on it... (It's dependent on the CD). If they built CD Players with the CD already in it, it would get boring very quickly...
So they build them so you can "inject" the CD, (on which it is dependent) into the player. That way you can inject a different one each time, and get "different" behavior (music) dependent on which one you inject.
The only requirement is that the CD must be compatible with the interface defined by the player. (You can't play a blue-ray disk in a 1992 CD player.)
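A tiny sketch of that analogy in code (class names invented for illustration), showing the disc being injected through the constructor rather than built into the player:

    class Cd:
        def __init__(self, title):
            self.title = title

        def play(self):
            return f"playing {self.title}"

    class CdPlayer:
        def __init__(self, disc):      # the disc is injected here
            self.disc = disc

        def press_play(self):
            return self.disc.play()

    # Same player, different behaviour depending on what you inject.
    print(CdPlayer(Cd("Kind of Blue")).press_play())
    print(CdPlayer(Cd("Abbey Road")).press_play())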
The best analogy I can think of is that of hiring a mechanic.
Without dependency injection, you hire a mechanic and the mechanic brings his own tools. He may have lousy tools, he may have great tools, he may be using a pipe wrench when he should be using a socket. You don't know, and may not care, so long as he gets the work done.
With dependency injection, you hire a mechanic and you provide him with the tools that you want him to do his work with. You get to choose what you consider to be the best or most appropriate tools for the work you are hiring him to do.
Think of it as a realisation of the "Inversion of Control" pattern. I guess your problem is that you are so used to it, you don't realize it's that simple.
Let's start at the beginning.
In the early days programs followed a given path through the code. The order of the called functions was given by the programmer.
In interactive programs, i.e. pretty much ANY program today, you cannot say which function is called at what time. Just look at a GUI or a website: you cannot say at what time which button or link will be clicked. So the "control" of what's happening is no longer in the program, it's at an outer source. The "control" has been inverted. The function is no longer "acting"; it is instead "listening". Think of the Hollywood principle: "Don't call us, we'll call you." A listener is a good example of a realisation of this pattern.
IoC is realized by functions or "methods" in the "object oriented world" of today.
"Dependency Injection" now means the same, but not for "methods", which do something, but for "objects", which hold data.
The data is no longer part of the object holding it. It is "injected" into the object at runtime. To stay in Hollywood, think of a film star who plays golf to talk business, but who, to keep in shape, has starved herself down, minimizing her muscle weight, so that she is only able to carry one club at a time.
So, on the golf course, her game would depend heavily on the one club she is carrying.
Luckily for her, there are caddies, who carry a whole lot of clubs at once and who also know which club to use at what time. Now she is independent of her limited ability to carry golf clubs. "Don't think about which concrete club to carry; we know them all and will give you the right one at the right time."
The film star is the object and the golf clubs are the members of the object. That's dependency injection.
Maybe focus on the "injection" part? When I see that term, I think of syringes. The process of pushing the dependencies of a component to the component can be thought of as injecting into the component.
Just like with the body, when there is something that it needs in the way of medicine (a component that it needs) you can inject it into the body.
In their 2003 JavaPolis presentation (slides), Jon Tirsén & Aslak Hellesøy had an amusing analogy with a Girl object that needs a Boy to kiss. I seem to remember that the BoyFactory is sometimes known as a 'nightclub', but that's not in the slides.
Another analogy: let us say you are a developer, and whenever you like you order computer science books from the market directly; you know the sellers and their prices. In fact your company might have a preferred seller whom you contact directly. All this works fine, but maybe a new seller is now offering better prices and your company wants to change the 'preferred' seller.
At this point you have to make the following changes: update the contact details (and other details) so as to use the new seller. You still place the order directly.
Now suppose we introduce a new step in between: there is a 'library' officer in the company, and you have to go through him to get the books. While there is a new dependency, you are now immune to any changes on the seller's side: whether the seller changes the mode of payment or the seller himself is replaced, you now simply place an order with the librarian and he gets the books for you.
From Head First Design Patterns:
Remember, code should be closed (to change) like the lotus flower in the evening, yet open (to extension) like the lotus flower in the morning
A DI-enabled object can be configured by injecting behaviors defined in other classes. The original object structure doesn't have to change in order to create many variations. The injection can be made explicit by having a class request other worker-classes in its constructor, or it can be less obvious when using monkeypatching in dynamic languages like Python.
Using the analogy of a Person class, you can take a basic human framework, pass it a set of organs, and watch it evolve. The Person doesn't directly know how the organs work, but their behaviors conform to an expected interface and influence the owner's physical and mental manifestation.
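A small sketch of that Person example (all names invented for illustration): the person does not build its own organs, it is handed anything that conforms to the expected interface.

    class SteadyHeart:
        def beat(self):
            return "thump"

    class RacingHeart:
        def beat(self):
            return "thump-thump-thump"

    class Person:
        def __init__(self, heart):     # the dependency is requested in the constructor
            self.heart = heart

        def pulse(self):
            return self.heart.beat()

    print(Person(SteadyHeart()).pulse())   # "thump"
    print(Person(RacingHeart()).pulse())   # same Person class, different behaviour

In a dynamic language the same variation can also be achieved less explicitly, for example by monkeypatching a method at runtime in a test.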
A magician's sleight of hand! What you may think you see may be secretly manipulated or replaced.
Life is full of dependency injection analogies:
printer - cartridge
digital device - battery
letter - stamp
musician - instrument
bus - driver
sickness - pill
The essence of Inversion of Control (of which Dependency Injection is an implementation) is the separation of the use of an object from the management thereof.
The analogy/example I use is an engine. An engine requires fuel to run, i.e. it is dependent on fuel. However, the engine cannot be responsible for the fuel it needs. It just 'asks' for fuel, and it is provided (typically by a fuel pump in a car).
The analogy starts breaking down when you look too deep, in that an engine doesn't ask for fuel, it is given it by some kind of management element, like an ECU. One might be able to compare the ECU to a container but I'm not certain how valid this is.
Your project manager asks you to write an app.
You could just write some code based on your career experience so far, but it's unlikely to be what your PM wants.
Better would be if your PM dependency injected you with say a spec for the app. Now your code is going to be related to the spec he gives you.
Better if you were told where the source repository was.
Better if you were told what the tech platform was.
Better if you were told when this needed to be done by.
Etc..
I think a great analogy is a six-year-old with a lego set.
You want your objects to be like the lego bricks. Each one is independent of all the others, and yet offers a clear interface for connecting them to the others. When connecting them together, it doesn't really matter exactly which two bricks you hook together so long as they have a matching interface.
Your dependency injection framework is like the six-year-old. He follows the instructions (i.e., your config file, annotations, etc.) to connect specific bricks together in certain ways to make a particular model.
Of course, since the bricks' interfaces are pretty generalized, they can go together in lots of different ways, so it's easy to come up with new sets of instructions which the six-year-old can use to make a completely different model out of the same bricks.
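As a toy sketch of the "six-year-old with instructions" (real containers such as Spring or Guice are far richer, and every name below is invented), a few lines that read a configuration and wire independent "bricks" together:

    class SqlUserStore:
        def load(self, user_id):
            return {"id": user_id, "source": "sql"}

    class InMemoryUserStore:
        def load(self, user_id):
            return {"id": user_id, "source": "memory"}

    class UserService:
        def __init__(self, store):
            self.store = store

        def get(self, user_id):
            return self.store.load(user_id)

    CONFIG = {"user_store": InMemoryUserStore}   # the "instructions"

    def build_user_service(config):
        # the "six-year-old": follows the instructions and snaps the bricks together
        return UserService(store=config["user_store"]())

    print(build_user_service(CONFIG).get(7))

Swapping the model is just a matter of handing the same bricks a different set of instructions.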
