In SDLC, what does "Design" cover?

Is it the flow diagrams, the user interface, or what?

In the design phase of the SDLC, the focus is on shaping the solution through system modeling. Modeling is typically done in UML (the Unified Modeling Language), which includes the following diagrams:
Class diagram
Use case diagram
Collaboration diagram
Sequence diagram
UML modeling is essentially a tool for defining the problem and working out its solution.
More: http://websolace.net/web-design-solutions/sdlc-maintains-the-quality/

From Wikipedia:
In systems, design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.
http://en.wikipedia.org/wiki/Systems_Development_Life_Cycle#Design

In the SDLC (Software Development Life Cycle), the design phase covers the following:
Partitioning requirements into hardware and software systems.
Designing the system architecture.
Creating UML diagrams (class diagrams, use cases, sequence diagrams, and activity diagrams).

Related

TML (Tractable Markov Logic) is a wonderful model! Why haven't I seen it used in a wide range of AI application scenarios?

I have been reading papers about Markov models, and suddenly a great extension, TML (Tractable Markov Logic), came out.
It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity.
This model has both complex logical structure and uncertainty.
It can represent objects, classes, and relations between objects, subject to certain restrictions which ensure that inference in any model built in TML can be queried efficiently.
I am just wondering why such a good idea has not spread widely to application areas like activity analysis.
More info
My understanding is that TML is polynomial in the size of the model, but the size of the model needs to be compiled to a given problem and may become exponentially large. So, at the end, it's still not really tractable.
However, it may be advantageous to use it in the case that the compiled form will be used multiple times, because then the compilation is done only once for multiple queries. Also, once you obtain the compiled form, you know what to expect in terms of run-time.
However, I think the main reason you don't see TML being used more broadly is that it is just an academic idea. There is no robust, general-purpose system based on it. If you try to work on a real problem with it, you will probably find out that it lacks certain practical features. For example, there is no way to represent a normal distribution with it, and lots of problems involve normal distributions. In such cases, one may still use the ideas behind the TML paper but would have to create their own implementation that includes further features needed for the problem at hand. This is a general problem that applies to lots and lots of academic ideas. Only a few become really useful and the basis of practical systems. Most of them exert influence at the level of ideas only.

Formal Concept Analysis data-sets

I am currently completing a postgrad degree in Information Systems Management and have been given a thesis topic that relates to Formal Concept Analysis.
The objective is to compare open-source software that can read data and represent it in a lattice diagram (an example application is Concept Explorer). Additionally, the performance of these tools needs to be compared with one another, with varying data-set sizes etc.
The main problem I'm experiencing is finding data sets compatible with these tools that are big enough to test the limits of each application, as well as figuring out how to accurately measure things such as the CPU time taken to draw the lattice diagram and other similar measures. Data for formal contexts generally follows a binary relation, such as a cross table that shows how attributes and objects are related.
As such, my question is: where would I find such data, and how could I manipulate it to be usable with software like Concept Explorer?
P.S. I am new here, so not sure if this is posted in the right place!
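For generating test data at arbitrary scale, here is a minimal sketch, assuming Concept Explorer's CXT (Burt) import format for formal contexts (the exact file layout is an assumption worth checking against the tool's documentation):

```python
import random

def write_random_cxt(path, n_objects, n_attributes, density=0.3):
    """Write a random formal context as a CXT (Burt) cross-table file.

    'density' is the probability that a given object has a given
    attribute, i.e. how full the cross table is.
    """
    with open(path, "w") as f:
        f.write("B\n\n")                              # CXT magic header
        f.write(f"{n_objects}\n{n_attributes}\n\n")   # context dimensions
        for i in range(n_objects):
            f.write(f"obj{i}\n")                      # object names
        for j in range(n_attributes):
            f.write(f"attr{j}\n")                     # attribute names
        for _ in range(n_objects):                    # one cross-table row per object
            f.write("".join("X" if random.random() < density else "."
                            for _ in range(n_attributes)) + "\n")

# Grow the dimensions until a tool's limits are reached.
write_random_cxt("context_200x30.cxt", n_objects=200, n_attributes=30)
```

On measurement: for a command-line tool, the CPU time of the child process can be captured (on Unix) with resource.getrusage(resource.RUSAGE_CHILDREN) or with an OS utility such as /usr/bin/time; timing the lattice-drawing step inside a GUI tool like Concept Explorer will likely need the tool's own logging. Note also that the number of formal concepts can grow exponentially with the size and density of the context, so varying density as well as the dimensions is worthwhile.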

Feature Selection: Hybrid vs Embedded approaches

I have been doing research on feature selection and I'm failing to understand the difference between these two approaches.
According to most authors in the literature, feature selection algorithms fall into three categories. The first two, filter and wrapper, are easy to understand and there is general agreement about them. On the last category, however, there seems to be a misunderstanding. Some authors, such as H. Liu, name the last category hybrid. In contrast, V. Kumar names it embedded. In addition, there are cases where authors define four categories including both embedded and hybrid algorithms, as P. Abinaya does.
Authors explain hybrid algorithms as a combination of a filter algorithm and a wrapper approach. The main idea behind these algorithms is to use a filter approach to reduce the search space for a wrapper approach.
On the other hand, the definition of embedded algorithms in the literature varies greatly depending on the source. Some use almost the same definition as for hybrid algorithms, as is the case on the Wikipedia page. Others give more abstract definitions, such as: methods that perform feature selection during the learning of optimal parameters, and methods that incorporate knowledge about the specific structure of the class of functions used by a certain learning machine.
So I would appreciate it if anyone could explain the difference between these two approaches to me, or give a less abstract definition of embedded methods.
Thanks.
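To make the contrast concrete, here is a minimal sketch using scikit-learn on a synthetic dataset (the parameter values are arbitrary): the hybrid pipeline chains a filter with a wrapper, while the embedded method selects features as a side effect of fitting its own parameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, random_state=0)

# Hybrid: a filter (univariate ANOVA F-scores) first shrinks the search
# space, then a wrapper (recursive feature elimination around a
# classifier) searches within the reduced space.
X_filtered = SelectKBest(f_classif, k=20).fit_transform(X, y)
wrapper = RFE(LogisticRegression(max_iter=1000),
              n_features_to_select=5).fit(X_filtered, y)

# Embedded: the learner itself performs the selection while learning its
# optimal parameters: an L1 penalty drives most coefficients to zero.
embedded = LogisticRegression(penalty="l1", solver="liblinear",
                              C=0.1).fit(X, y)
print("features kept by the embedded method:",
      np.flatnonzero(embedded.coef_[0]))
```

Under the more abstract definitions you quoted, L1-regularized models and decision trees are the textbook embedded examples, because dropping features is a by-product of parameter fitting rather than a separate search loop.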

What's the difference between incremental software process model, evolutionary model, and the spiral model?

I am studying Software Engineering this year and I am a little confused about the question in the title.
Both my professor and the reference ("Software Engineering: A Practitioner's Approach") differentiate the three as distinct models. However, I can't see an obvious difference, as their methodologies look the same to me, just defined in different words.
I feel that practically they all represent the same process model.
Can anybody explain the different models better?
Craig Larman wrote extensively on this topic and I suggest his famous paper Iterative and Incremental Development: A Brief History (PDF) and his book Agile and Iterative Development: A Manager's Guide.
Here is how I would summarize things:
Incremental Development
Incremental Development is a practice where the system functionalities are sliced into increments (small portions). In each increment, a vertical slice of functionality is delivered by going through all the activities of the software development process, from the requirements to the deployment.
Incremental Development (adding) is often used together with Iterative Development (redo) in software development. This is referred to as Iterative and Incremental Development (IID).
Evolutionary method
The terms evolution and evolutionary were introduced by Tom Gilb in his book Software Metrics, published in 1976, where he wrote about EVO, his practice of IID (perhaps the oldest). Evolutionary development focuses on early delivery of high value to stakeholders and on obtaining and utilizing feedback from stakeholders.
In Software Development: Iterative & Evolutionary, Craig Larman puts it like this:
Evolutionary iterative development implies that the requirements, plan, estimates, and solution evolve or are refined over the course of the iterations, rather than fully defined and “frozen” in a major up-front specification effort before the development iterations begin. Evolutionary methods are consistent with the pattern of unpredictable discovery and change in new product development.
And then discusses further evolutionary requirements, evolutionary and adaptive planning, evolutionary delivery. Check the link.
Spiral model
The Spiral Model is another IID approach, formalized by Barry Boehm in the mid-1980s as an extension of Waterfall, that better supports iterative development and puts a special emphasis on risk management (through iterative risk analysis).
Quoting Iterative and Incremental Development: A Brief History:
A 1985 landmark in IID publications was Barry Boehm's "A Spiral Model of Software Development and Enhancement" (although the more frequent citation date is 1986). The spiral model was arguably not the first case in which a team prioritized development cycles by risk: Gilb and IBM FSD had previously applied or advocated variations of this idea, for example. However, the spiral model did formalize and make prominent the risk-driven-iterations concept and the need to use a discrete step of risk assessment in each iteration.
What now?
Agile Methods are a subset of IID and evolutionary methods and are preferred nowadays.
References
Iterative and Incremental Development: A Brief History - Craig Larman, Victor R. Basili (June 2003)
Software Development: Iterative & Evolutionary - Craig Larman
Incremental versus iterative development - Alistair Cockburn
Iterative and incremental development
Software development process
T. Gilb, Software Metrics, Little, Brown, and Co., 1976 (out of print).
B. Boehm, “A Spiral Model of Software Development and Enhancement,” Proc. Int’l Workshop Software Process and Software Environments, ACM Press, 1985; also in ACM Software Eng. Notes, Aug. 1986, pp. 22-42.
These concepts are usually poorly explained.
Incremental is a property of the work products (documents, models, source code, etc.), and it means that they are created little by little rather than in a single go. For example, you create a first version of your class model during requirements analysis, then augment it after UI modelling, and then extend it further during detailed design.
Evolutionary is a property of deliverables, i.e. work products that are delivered to the users, and in this regard it is a particular kind of "incremental". It means that whatever is delivered is delivered as early as possible in an initial, not fully functional form, and then re-delivered every so often, each time with more and more functionality. This often implies an iterative lifecycle.
[An iterative lifecycle, by the way, refers to the tasks that you carry out (as opposed to "incremental", which refers to the products; this is the view adopted by SEMAT), and it means that you perform tasks of the same type over and over. For example, in an iterative lifecycle you would find yourself doing design, then coding, then unit testing, then release, and then again the same things, over and over. Please note that iterative and incremental do not imply each other; any combination of the two is possible.]
The spiral model for lifecycles is a model proposed by Barry Boehm that combines aspects of waterfall with innovative advances such as an iterative approach and built-in quality control.
For the concepts of "work product", "task", "lifecycle", etc. please see ISO/IEC 24744.
Hope this helps.
This is the verbatim definition from ISO 24748-1:2016 (Systems and Software Engineering Life Cycle Management):
There are many different development strategies that can be applied to system and software projects. Three of these strategies are summarized below:
a) Once-through. The “once-through” strategy, also called “waterfall,” consists of performing the development process a single time. Simplistically: determine user needs, define requirements, design the system, implement the system, test, fix and deliver.
b) Incremental. The “incremental” strategy determines user needs and defines the system requirements, then performs the rest of the development in a sequence of builds. The first build incorporates part of the planned capabilities, the next build adds more capabilities, and so on, until the system is complete.
c) Evolutionary. The “evolutionary” strategy also develops a system in builds but differs from the incremental strategy in acknowledging that the user need is not fully understood and all requirements cannot be defined up front. In this strategy, user needs and system requirements are partially defined up front, and then are refined in each succeeding build.
Hope this helps. Tati

Intelligent code-completion? Is there AI to write code by learning?

I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering whether an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then progresses by learning from former iterations. I am talking about working to make us programmers obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a web site after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage; but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought is it would be nice to give a high-level description like "menued website" or "image tools" and it generates code with enough depth that would be useful as a code completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent static hierarchical code completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is a valid, working program and what is not.
They have been totally useless at building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
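As a toy illustration of that point, here is a minimal genetic-programming sketch (a symbolic-regression task invented for this answer, not a production GP system): the fitness function is trivially explicit here, which is exactly what interactive, mainstream programs lack.

```python
import operator
import random

# Evolve an arithmetic expression f(x) approximating a hidden target
# (here x*x + x). Trees are either "x", an integer constant, or
# (binary_op, left_subtree, right_subtree).
OPS = [operator.add, operator.sub, operator.mul]

def random_tree(depth=3):
    """Grow a random expression tree over {x, small constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    fn, left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """The explicit success measure: lower squared error is better."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Crudest possible mutation: occasionally replace a tree wholesale."""
    return random_tree() if random.random() < 0.3 else tree

population = [random_tree() for _ in range(200)]
for _generation in range(50):
    population.sort(key=fitness)                       # selection pressure
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]     # reproduction
best = min(population, key=fitness)
print("best fitness:", fitness(best))
```

A real GP system would add crossover and subtree-level mutation; the point is only that the whole loop hinges on being able to write fitness() down at all, which is easy here and essentially impossible for "did the user like this website?".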
Another domain that deals with "program learning" is Inductive Logic Programming, although it is used more for automated theorem proving or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur, so expect imprecisions and/or errors in what follows. In the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic-programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation, refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note, his work on artificial curiosity is also interesting.) Autonomic Systems are also relevant to this discussion.
w.r.t. program synthesis, I think it is possible to identify three main branches: stochastic (probabilistic, like the above-mentioned GP), inductive, and deductive.
GP is essentially stochastic because it explores the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (then it tests the programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic-program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (though there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of likely programs satisfying that specification and then test candidates (the generate-and-test approach), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference; i.e., deciding what to include in the incomplete specification is akin to random sampling.
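A minimal generate-and-test sketch (the primitive set and examples are invented for illustration): given an incomplete specification as input/output pairs, it enumerates pipelines of primitives and returns the first one consistent with every example.

```python
from itertools import product

# Toy generate-and-test inductive synthesis: the DSL is pipelines of
# string primitives; the specification is a few input/output examples.
PRIMITIVES = {
    "reverse": lambda s: s[::-1],
    "upper":   lambda s: s.upper(),
    "first3":  lambda s: s[:3],
}

def run(names, s):
    """Apply a pipeline of primitives, in order, to the input string."""
    for name in names:
        s = PRIMITIVES[name](s)
    return s

def synthesize(examples, max_depth=3):
    """Return the first pipeline consistent with all examples, if any."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            if all(run(names, i) == o for i, o in examples):
                return names
    return None

# Incomplete specification: two input/output pairs.
print(synthesize([("hello", "LEH"), ("world", "ROW")]))
# -> ('upper', 'first3', 'reverse')
```

Brute-force enumeration like this explodes combinatorially with the primitive set and depth, which is why the search-space constraints and the analytical shortcuts mentioned above matter so much.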
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated in public until now), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links point directly to PDF files: sorry, I was unable to find an abstract.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis starts instead with a (presumably) complete (formal) specification (logic conditions). One technique leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
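As a tiny illustration of the proofs-as-programs side (a toy in Lean 4, not a synthesis system; the lemma name is from Lean's core library): a constructive proof that an object meeting a specification exists literally contains the program that computes it.

```lean
-- Specification: for every natural number n there exists an m with n < m.
-- A constructive proof must exhibit a witness; that witness (n + 1) is
-- the "synthesized program", and the proof term certifies it correct.
def succWitness (n : Nat) : { m : Nat // n < m } :=
  ⟨n + 1, Nat.lt_succ_self n⟩

-- Running the extracted program:
#eval (succWitness 41).val  -- 42
```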
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow; the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the level of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and on Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity than is introduced by current manual, non-self-improving/adapting techniques). This would also promote a level of agility yet to be demonstrated by current techniques.
