What's the difference between incremental software process model, evolutionary model, and the spiral model? - sdlc

I am studying Software Engineering this year and I am a little confused about the question in the title.
Both my professor and the reference book ("Software Engineering: A Practitioner's Approach") treat the three as different models. However, I can't see an obvious difference: their methodologies look the same to me, just defined with different wording.
I feel that, practically, they all represent the same process model.
Can anybody explain the different models better?

Craig Larman wrote extensively on this topic and I suggest his famous paper Iterative and Incremental Development: A Brief History (PDF) and his book Agile and Iterative Development: A Manager's Guide.
Here is how I would summarize things:
Incremental Development
Incremental Development is a practice where the system functionalities are sliced into increments (small portions). In each increment, a vertical slice of functionality is delivered by going through all the activities of the software development process, from the requirements to the deployment.
Incremental Development (adding) is often used together with Iterative Development (redo) in software development. This is referred to as Iterative and Incremental Development (IID).
Evolutionary method
The terms evolution and evolutionary were introduced by Tom Gilb in his book Software Metrics, published in 1976, where he wrote about EVO, his IID practice (perhaps the oldest one). Evolutionary development focuses on early delivery of high value to stakeholders and on obtaining and utilizing feedback from stakeholders.
In Software Development: Iterative & Evolutionary, Craig Larman puts it like this:
Evolutionary iterative development implies that the requirements, plan, estimates, and solution evolve or are refined over the course of the iterations, rather than fully defined and “frozen” in a major up-front specification effort before the development iterations begin. Evolutionary methods are consistent with the pattern of unpredictable discovery and change in new product development.
And then discusses further evolutionary requirements, evolutionary and adaptive planning, evolutionary delivery. Check the link.
Spiral model
The Spiral Model is another IID approach, formalized by Barry Boehm in the mid-1980s as an extension of the waterfall model to better support iterative development, with a special emphasis on risk management (through iterative risk analysis).
Quoting Iterative and Incremental Development: A Brief History:
A 1985 landmark in IID publications was Barry Boehm’s “A Spiral Model of Software Development and Enhancement” (although the more frequent citation date is 1986). The spiral model was arguably not the first case in which a team prioritized development cycles by risk: Gilb and IBM FSD had previously applied or advocated variations of this idea, for example. However, the spiral model did formalize and make prominent the risk-driven-iterations concept and the need to use a discrete step of risk assessment in each iteration.
What now?
Agile Methods are a subset of IID and evolutionary methods and are preferred nowadays.
References
Iterative and Incremental Development: A Brief History - Craig Larman, Victor R. Basili (June 2003)
Software Development: Iterative & Evolutionary - Craig Larman
Incremental versus iterative development - Alistair Cockburn
Iterative and incremental development
Software development process
T. Gilb, Software Metrics, Little, Brown, and Co., 1976 (out of print).
B. Boehm, “A Spiral Model of Software Development and Enhancement,” Proc. Int’l Workshop Software Process and Software Environments, ACM Press, 1985; also in ACM Software Eng. Notes, Aug. 1986, pp. 22-42.

These concepts are usually poorly explained.
Incremental is a property of the work products (documents, models, source code, etc.), and it means that they are created little by little rather than in a single go. For example, you create a first version of your class model during requirements analysis, then augment it after UI modelling, and then extend it even further during detailed design.
Evolutionary is a property of deliverables, i.e. work products that are delivered to the users, and in this regard it is a particular kind of "incremental". It means that whatever is delivered is delivered as early as possible in an initial, not fully functional form, and then re-delivered every so often, each time with more and more functionality. This often implies an iterative lifecycle.
[An iterative lifecycle, by the way, refers to the tasks that you carry out (as opposed to "incremental", which refers to the products; this is the view adopted by SEMAT), and it means that you perform tasks of the same type over and over. For example, in an iterative lifecycle you would find yourself doing design, then coding, then unit testing, then a release, and then the same things again, over and over. Please note that iterative and incremental do not imply each other; any combination of the two is possible.]
The spiral model for lifecycles is a model proposed by Barry Boehm that combines aspects of waterfall with innovative advances such as an iterative approach and built-in quality control.
For the concepts of "work product", "task", "lifecycle", etc. please see ISO/IEC 24744.
Hope this helps.

This is the verbatim (ipsis litteris) definition from ISO 24748-1:2016 (Systems and Software Engineering: Life Cycle Management):
There are many different development strategies that can be applied to system and software projects. Three of these strategies are summarized below:
a) Once-through. The “once-through” strategy, also called “waterfall,” consists of performing the development process a single time. Simplistically: determine user needs, define requirements, design the system, implement the system, test, fix and deliver.
b) Incremental. The “incremental” strategy determines user needs and defines the system requirements, then performs the rest of the development in a sequence of builds. The first build incorporates part of the planned capabilities, the next build adds more capabilities, and so on, until the system is complete.
c) Evolutionary. The “evolutionary” strategy also develops a system in builds but differs from the incremental strategy in acknowledging that the user need is not fully understood and all requirements cannot be defined up front. In this strategy, user needs and system requirements are partially defined up front, and then are refined in each succeeding build.
Hope this helps. Tati


What is Function Point Analysis?

What does Function Point Analysis mean?
Is it used for cost estimation of software, or is there a proper definition of Function Point Analysis?
Can you please give me a short description of it.
While I agree with Leo's answer, I'll try a more practical description:
What it is
Function Point Analysis (FPA) is one of the currently five Functional Size Measurement standards approved by ISO (see ISO/IEC 14143). FPA is actually the widely used short name for the ISO/IEC 20926 standard, titled "IFPUG Functional Size Measurement".
FPA is a means to rate (the term 'measure' is actually misleading) the amount of functional requirements of a piece of software. To achieve this rating, a technique is used that was known as 'functional decomposition' in earlier times. This concept is in fact very close to describing requirements with 'use cases', even though the detailed rules and notations are quite different.
In short, the functional requirements are decomposed into 'elementary functions', which are then each rated with a point value. The total of the points for all elementary functions is used as an indication of the 'size', or amount, of the requirements. This is called the 'functional size', expressed in the unit of 'function points' (fp).
The natural representation of a functional decomposition is the functional tree.
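As a minimal illustration of the totaling described above, here is a hedged Python sketch; the elementary functions are invented and the weights only roughly follow the commonly published IFPUG rating tables, so treat both as placeholders rather than an official counting procedure.

```python
# Illustrative only: invented elementary functions, and weights that roughly
# follow commonly published IFPUG tables (not an official counting procedure).
ILLUSTRATIVE_WEIGHTS = {
    ("external_input", "average"): 4,
    ("external_output", "average"): 5,
    ("external_inquiry", "low"): 3,
    ("internal_logical_file", "high"): 15,
}

# Hypothetical elementary functions obtained by functional decomposition.
elementary_functions = [
    ("register customer", "external_input", "average"),
    ("print monthly invoice", "external_output", "average"),
    ("look up customer by name", "external_inquiry", "low"),
    ("customer master data", "internal_logical_file", "high"),
]

def unadjusted_function_points(functions, weights):
    """Sum the point value assigned to each elementary function."""
    return sum(weights[(ftype, complexity)] for _, ftype, complexity in functions)

print(unadjusted_function_points(elementary_functions, ILLUSTRATIVE_WEIGHTS))  # 4 + 5 + 3 + 15 = 27
```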
The FPA standard also has a set of rules for rating changes to existing applications, so it can be used to rate the functional requirements for the adaptation or extension of existing systems ('enhancements' or 'releases').
What it is not
FPA is not an effort estimation technique by itself. Obviously, the relation between the size of functional requirements and the implementation effort can be and often is rather loose. Function points can be used as (one) input to more complex estimation models (such as COCOMO), which have to take into account all other effort drivers.
FPA is not a 'software metric' - functional size is always related to the user requirements fulfilled by software. While you can count and measure lines of code or code complexity, functional size is the result of an analytical process.
When to use it
FPA can be helpful to estimate the effort for a software project at an early stage, when the requirements are known but the details of implementation have not yet been specified or evaluated. The functional requirements are reflected in the functional size; the non-functional requirements need to be fed into an estimation model. You need to have/use a good, proven (and trusted) model, otherwise the functional size is useless for this purpose.
FPA can also help to rate the 'value' of an application in the sense of 'recovery costs'.
Eventually, in the context of IT client/vendor relationships, FPA can be used as a basis for pricing. Clients are invoiced based on an agreed 'price per fp' instead of an hourly rate.
When not to use it
By definition, FPA requires a basic understanding of the functional requirements. Thus, if you do not have or know the functional requirements, it will be difficult if not impossible to use FPA.
FPA is also not suited to rating the performance of individuals, as it is a rather holistic rating for an application and cannot be used to size only parts of it.
The authoritative answer, from IFPUG:
http://www.ifpug.org/about-ifpug/about-function-point-analysis/
Function Point Analysis (FPA) is a sizing measure of clear business significance. First made public by Allan Albrecht of IBM in 1979, the FPA technique quantifies the functions contained within software in terms that are meaningful to the software users. The measure relates directly to the business requirements that the software is intended to address. It can therefore be readily applied across a wide range of development environments and throughout the life of a development project, from early requirements definition to full operational use. Other business measures, such as the productivity of the development process and the cost per unit to support the software, can also be readily derived.
The function point measure itself is derived in a number of stages. Using a standardized set of basic criteria, each of the business functions is assigned a numeric index according to its type and complexity. These indices are totaled to give an initial measure of size, which is then normalized by incorporating a number of factors relating to the software as a whole. The end result is a single number called the Function Point index, which measures the size and complexity of the software product.
In summary, the function point technique provides an objective, comparative measure that assists in the evaluation, planning, management and control of software production.
PS: the IFPUG definition is what is taken as authoritative in court here in Brazil when there is any kind of dispute about function points (mostly because government contracts are usually defined in FPs).
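The "normalized by incorporating a number of factors" step in the quote corresponds, in the classic IFPUG method, to the value adjustment factor. The sketch below uses the classic formula (VAF = 0.65 + 0.01 × TDI); whether and how you apply this adjustment depends on the counting standard you follow, so treat it as illustrative.

```python
# Hedged sketch of the classic IFPUG adjustment step described in the quote:
# an unadjusted count is normalized by a value adjustment factor (VAF) derived
# from 14 general system characteristics, each rated 0-5.
def adjusted_function_points(unadjusted_fp, gsc_ratings):
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    total_degree_of_influence = sum(gsc_ratings)    # TDI, ranges 0..70
    vaf = 0.65 + 0.01 * total_degree_of_influence   # classic adjustment formula
    return unadjusted_fp * vaf

# Example: 27 unadjusted points, middling characteristics (all rated 3).
print(adjusted_function_points(27, [3] * 14))  # 27 * (0.65 + 0.42) = 28.89
```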

How to extract features from fmri?

I have an fMRI dataset for classifying normal controls versus Alzheimer's patients. As a newbie, I'm unable to extract features from my dataset. I want to extract activation patterns, GM/WM/CSF volumetric measures, and haemodynamics in numerical form. Please guide me on where to start and suggest some easy and efficient software for this work. I'll be obliged.
Take a look at the software packages called FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping).
Each of them can do the kind of analyses you're asking about. However, be warned that none of these analyses are trivial. You should probably read up a bit on the subject, first. The Handbook of Functional MRI Data Analysis is a great place to start for beginners.
As #WeirdAlchemy says, these are many analyses you want to carry out, and all of them are non-trivial. You typically learn to do these over weeks at a relevant intensive course, or over months during a neuroscience Master's programme. To answer your question very explicitly:
GM, WM & CSF volumetric measures - You can do this with FSL SIENA, SPM VBM, AFNI 3Dclust, among others.
"Extract activation patterns" is too vague. In all probability, you likely have task-related BOLD fMRI data and want to perform a general linear model (GLM) analysis. FSL FEAT, SPM fMRI, AFNI and others support this. However, without knowing the experimental design, the nature of the data, and what you want to learn from it, it's hard to be more specific about which tool is appropriate.
"Haemodynamics in numerical form" This can mean a number of things, but if you are thinking about the amount of haemodynamic signal modulation (e.g. Condition led to a 2% change in BOLD signal), you get that out of the GLM analysis mentioned above.

Machine Learning & Big Data [closed]

In the beginning, I would like to describe my current position and the goal that I would like to achieve.
I am a researcher dealing with machine learning. So far I have gone through several theoretical courses covering machine learning algorithms and social network analysis, and have therefore gained some theoretical concepts useful for implementing machine learning algorithms and feeding in real data.
On simple examples the algorithms work well and the running time is acceptable, whereas big data represents a problem when trying to run algorithms on my PC. Regarding software, I have enough experience to implement any algorithm from articles or design my own, using whatever language or IDE (so far I have used Matlab, Java with Eclipse, .NET...), but so far I haven't had much experience with setting up infrastructure. I have started to learn about Hadoop, NoSQL databases, etc., but I am not sure which strategy would be best given my learning time constraints.
The final goal is to be able to set up a working platform for analyzing big data, focused on implementing my own machine learning algorithms, and put it all together into production, ready to solve useful questions by processing big data.
As the main focus is on implementing machine learning algorithms, I would like to ask whether there is any existing platform offering enough CPU resources to feed in large data, upload my own algorithms, and simply process the data without having to think about distributed processing.
Whether or not such a platform exists, I would like to gain a big enough picture to be able to work in a team that could put into production a whole system tailored to specific customer demands. For example, a retailer would like to analyze daily purchases, so all the daily records have to be uploaded to some infrastructure capable of processing the data with custom machine learning algorithms.
To put all the above into a simple question: how do you design a custom data mining solution for real-life problems, with the main focus on machine learning algorithms, and put it into production, if possible by using existing infrastructure, and if not, by designing a distributed system (using Hadoop or some other framework)?
I would be very thankful for any advice or suggestions about books or other helpful resources.
First of all, your question needs to define more clearly what you mean by Big Data.
Indeed, Big Data is a buzzword that may refer to problems of various sizes. I tend to define Big Data as the category of problems where the data size or the computation time is big enough for "the hardware abstractions to become broken", which means that a single commodity machine cannot perform the computations without careful attention to computation and memory.
The scale threshold beyond which data becomes Big Data is therefore unclear and is sensitive to your implementation. Is your algorithm bound by hard-drive bandwidth? Does it have to fit into memory? Did you try to avoid unnecessary quadratic costs? Did you make any effort to improve cache efficiency, etc.?
From several years of experience running medium/large-scale machine learning challenges (on up to 250 commodity machines), I strongly believe that many problems that seem to require a distributed infrastructure can actually be run on a single commodity machine if the problem is expressed correctly. For example, you mention large-scale data for retailers. I have been working on this exact subject for several years, and I often managed to make all the computations run on a single machine, given a bit of optimisation. My company has been working on a simple custom data format that allows one year of all the data from a very large retailer to be stored within 50GB, which means a single commodity hard drive can hold 20 years of history. You can have a look for example at: https://github.com/Lokad/lokad-receiptstream
From my experience, it is worth spending time optimizing the algorithm and memory use so that you can avoid resorting to a distributed architecture. Indeed, distributed architectures come with a triple cost. First of all, the strong knowledge requirements. Secondly, a large complexity overhead in the code. Finally, distributed architectures come with a significant latency overhead (with the exception of local multi-threaded distribution).
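To make the single-machine argument concrete, here is a hedged Python sketch of one streaming pass over a file that does not fit in RAM; the file name and column names are invented for illustration.

```python
# Hedged sketch: a single streaming pass over a file too large for RAM, on one
# machine. The file name and column names are invented for illustration.
import pandas as pd

totals = {}
for chunk in pd.read_csv("retail_receipts.csv", chunksize=1_000_000):
    # Aggregate revenue per product; only one chunk is ever held in memory.
    for product, amount in chunk.groupby("product_id")["amount"].sum().items():
        totals[product] = totals.get(product, 0.0) + amount

top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top)
```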
From a practitioner's point of view, being able to perform a given data mining or machine learning computation in 30 seconds is one of the key factors for efficiency. I have noticed that when some computation, whether sequential or distributed, takes 10 minutes, my focus and efficiency tend to drop quickly, as it becomes much more complicated to iterate and test new ideas quickly. The latency overhead introduced by many of the distributed frameworks is such that you will inevitably be in this low-efficiency scenario.
If the scale of the problem is such that even with strong effort you cannot perform it on a single machine, then I strongly suggest resorting to off-the-shelf distributed frameworks instead of building your own. One of the best-known is the MapReduce abstraction, available through Apache Hadoop. Hadoop can run on clusters of 10,000 nodes, probably far more than you will ever need. If you do not own the hardware, you can "rent" the use of a Hadoop cluster, for example through Amazon Elastic MapReduce.
Unfortunately, the MapReduce abstraction is not suited to all Machine Learning computations.
As far as Machine Learning is concerned, MapReduce is a rigid framework and numerous cases have proved to be difficult or inefficient to adapt to this framework:
– The MapReduce framework is in itself related to functional programming. The Map procedure is applied to each data chunk independently. Therefore, the MapReduce framework is not suited to algorithms where the application of the Map procedure to some data chunks needs the results of the same procedure on other data chunks as a prerequisite. In other words, the MapReduce framework is not suited when the computations between the different pieces of data are not independent and impose a specific chronology.
– MapReduce is designed to provide a single execution of the map and of the reduce steps and does not directly provide iterative calls. It is therefore not directly suited to the numerous machine-learning problems involving iterative processing (Expectation-Maximisation (EM), Belief Propagation, etc.). Implementing these algorithms in a MapReduce framework means the user has to engineer a solution that organizes result retrieval and the scheduling of the multiple iterations, so that each map iteration is launched after the reduce phase of the previous iteration has completed and is fed with the results produced by that reduce phase (see the driver-loop sketch after this list).
– Most MapReduce implementations have been designed to address production needs and robustness. As a result, the primary concern of the framework is to handle hardware failures and to guarantee the computation results. MapReduce efficiency is therefore partly lowered by these reliability constraints. For example, the serialization of computation results to hard disk turns out to be rather costly in some cases.
– MapReduce is not suited to asynchronous algorithms.
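As a toy, in-memory illustration of the driver-loop pattern referenced in the second bullet, the sketch below runs one map/reduce round per iteration of a simple 1-D k-means and feeds each round's output into the next; it is a conceptual stand-in, not Hadoop code.

```python
# Toy, in-memory sketch of the driver-loop pattern: each iteration of an
# iterative algorithm (here, 1-D k-means) is one full map + reduce round, and
# the driver must feed each round's output into the next round's input.
from collections import defaultdict

def map_phase(points, centroids):
    # map: emit (index_of_nearest_centroid, point) for every input point
    for p in points:
        yield min(range(len(centroids)), key=lambda i: abs(p - centroids[i])), p

def reduce_phase(pairs):
    # reduce: average the points assigned to each centroid index
    groups = defaultdict(list)
    for key, p in pairs:
        groups[key].append(p)
    return {k: sum(v) / len(v) for k, v in groups.items()}

points = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
centroids = [0.0, 5.0]
for _ in range(10):                       # driver loop: one map/reduce round per iteration
    new = reduce_phase(map_phase(points, centroids))
    centroids = [new.get(i, c) for i, c in enumerate(centroids)]
print(centroids)                          # converges to roughly [1.5, 9.5]
```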
The questioning of the MapReduce framework has led to richer distributed frameworks where more control and freedom are left to the framework user, at the price of more complexity for that user. Among these frameworks, GraphLab and Dryad (both based on Directed Acyclic Graphs of computations) are well known.
As a consequence, there is no "One size fits all" framework, such as there is no "One size fits all" data storage solution.
To start with Hadoop, you can have a look at the book Hadoop: The Definitive Guide by Tom White
If you are interested in how large-scale frameworks fit Machine Learning requirements, you may be interested in the second chapter (in English) of my PhD thesis, available here: http://tel.archives-ouvertes.fr/docs/00/74/47/68/ANNEX/texfiles/PhD%20Main/PhD.pdf
If you provide more insight about the specific challenge you want to deal with (type of algorithm, size of the data, time and money constraints, etc.), we could probably provide you with a more specific answer.
Edit: another reference that could prove to be of interest: Scaling-up Machine Learning
I had to implement a couple of data mining algorithms to work with big data too, and I ended up using Hadoop.
I don't know if you are familiar with Mahout (http://mahout.apache.org/), which already has several algorithms ready to use with Hadoop.
Nevertheless, if you want to implement your own algorithm, you can still adapt it to Hadoop's MapReduce paradigm and get good results. This is an excellent book on how to adapt artificial intelligence algorithms to MapReduce:
Mining of Massive Datasets - http://infolab.stanford.edu/~ullman/mmds.html
This seems to be an old question. However, given your use case, the main frameworks focusing on machine learning in the big data domain are Mahout, Spark (MLlib), H2O, etc. To run machine learning algorithms on big data, though, you have to convert them to parallel programs based on the MapReduce paradigm. This is a nice article giving a brief introduction to the major (not all) big data frameworks:
http://www.codophile.com/big-data-frameworks-every-programmer-should-know/
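As a hedged illustration of leaning on one of these frameworks (Spark MLlib from Python) instead of hand-writing MapReduce jobs, a sketch might look like the following; the input path and column names are invented.

```python
# Hedged sketch using Spark MLlib from Python; the input path and column names
# are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("daily-purchases").getOrCreate()
df = spark.read.csv("hdfs:///purchases/2024-01-01.csv", header=True, inferSchema=True)

# Assemble numeric columns into a feature vector and cluster the purchases.
features = VectorAssembler(inputCols=["amount", "items", "hour"], outputCol="features")
model = KMeans(k=5, featuresCol="features").fit(features.transform(df))
model.transform(features.transform(df)).select("prediction").show(5)

spark.stop()
```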
I hope this will help.

Machine learning/information retrieval project

I’m reading towards an M.Sc. in Computer Science and have just completed the first year of the course (this is a two-year course). Soon I have to submit a proposal for the M.Sc. project. I have selected the following topic:
“Suitability of machine learning for document ranking in information retrieval systems”. Researchers have been using various machine learning algorithms for ranking documents. So, as the first phase of the project, I will be doing a complete literature survey and finding out the advantages/disadvantages of current approaches. In the second phase of the project I will propose a new (modified) algorithm in order to overcome the limitations of current approaches.
Actually, my question is whether this type of project is suitable as an M.Sc. project. Moreover, if somebody has some interesting ideas in the information retrieval field, would it be possible to share those ideas with me?
Thanks
Ranking is always the hardest part of any Information Retrieval system. I think it is a very good topic, but you have to take care to define the scope of the work as soon as possible. You probably will not be able to develop a new IR engine, but rather build a prototype based on, e.g., Apache Lucene.
Currently there are a lot of datasets, including the Stack Overflow data dump, which provide all the information you need to define a rich feature vector for your machine learning ranking algorithm (number of points, timing, topics mined from previous questions, popularity of a tag, etc.). In this part of the work you could, e.g., classify the types of features (e.g., user-specific features, semantic features such as a software name in the title) and perform a series of experiments to learn which features are most important for a given dataset and which are not; a hedged sketch of a simple feature-based baseline follows below.
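Here is that sketch: a pointwise baseline that treats ranking as relevance prediction with scikit-learn; the feature names and the tiny dataset are invented purely for illustration.

```python
# Hedged sketch of a pointwise ranking baseline on invented Stack Overflow-style
# features; the feature names and data are placeholders, not real dump values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row: [answer_score, answerer_reputation, tag_popularity, title_term_overlap]
X = np.array([[12, 5400, 0.9, 0.6],
              [ 0,  120, 0.9, 0.2],
              [ 3,  800, 0.4, 0.8],
              [25, 9900, 0.7, 0.9],
              [ 1,   50, 0.1, 0.1],
              [ 7, 2100, 0.5, 0.5]])
y = np.array([1, 0, 1, 1, 0, 0])   # 1 = accepted/relevant answer

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=3))   # quick relevance-prediction check
print(model.fit(X, y).coef_)                # crude look at which features matter
```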
The second direction for such a project could be how to perform the learning efficiently. The reason is the quantity of data on the web and in community forums, and the rate of change in a forum (this matters if you use community-specific features), e.g., changes in technologies, new software releases, etc.
There are many other topics related to search and machine learning. The best idea is to search on scholar.google.com for recent survey papers on ranking, machine learning, and search to learn what the state of the art is. The very next step would be to talk with your M.Sc. supervisor.
Good luck!
Everything you said is good and should be done, but you forgot the most important part:
Prove that your algorithm is better and/or faster than other algorithms, with good experiments and maybe some statistics (p-value, confidence interval).
If you do that and convince people that your algorithm is useful you surely will not fail :)
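For the statistics part, a hedged sketch of a paired significance test over per-query metrics (e.g. NDCG@10 for a baseline ranker versus your modified ranker) might look like this; the numbers are invented.

```python
# Hedged sketch: paired significance test on per-query metrics (e.g. NDCG@10)
# for a baseline ranker vs. a proposed ranker; the numbers are invented.
from scipy.stats import wilcoxon

baseline = [0.61, 0.55, 0.70, 0.48, 0.66, 0.59, 0.73, 0.52]
proposed = [0.65, 0.58, 0.71, 0.55, 0.69, 0.60, 0.78, 0.57]

stat, p_value = wilcoxon(proposed, baseline)   # paired, non-parametric test
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
```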

Intelligent code-completion? Is there AI to write code by learning?

I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering if an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own, and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a website after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought is that it would be nice to give it a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is a valid, working program and what is not.
They have been totally useless in trying to build mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
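To make the role of the fitness function concrete, here is a toy Python sketch; it uses plain random search over tiny expression trees rather than real GP operators (no crossover or mutation of survivors), so treat it only as an illustration of fitness-driven program search.

```python
# Toy sketch of fitness-driven program search (random search, not full GP):
# candidate "programs" are tiny expression trees over {x, 1, +, *}, and an
# explicit fitness function scores how well each reproduces f(x) = x*x + 1.
import random

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1])
    return (random.choice(["+", "*"]), random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, a, b = tree
    return run(a, x) + run(b, x) if op == "+" else run(a, x) * run(b, x)

def fitness(tree):
    # Lower is better: squared error against the target behaviour on sample inputs.
    return sum((run(tree, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

# Crude search loop: keep the best candidate found so far.
best = random_tree()
for _ in range(2000):
    candidate = random_tree()
    if fitness(candidate) < fitness(best):
        best = candidate
print(best, fitness(best))
```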
Another domain that can be seen as dealing with "program learning" is Inductive Logic Programming, although it is more often used for automated theorem proving or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur, so expect imprecisions and/or errors in what follows. So, in the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: interesting is his work on artificial curiosity.) Also relevant for this discussion are Autonomic Systems.
w.r.t. program synthesis, I think is possible to classify 3 main branches: stochastic (probabilistic - like above mentioned GP), inductive and deductive.
GP is essentially stochastic because it explores the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (then it tests programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it to constrain the search space of likely programs satisfying that specification, and then either tests the candidates (the generate-and-test approach) or directly synthesizes a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference, i.e. deciding what to include in the incomplete specification is akin to random sampling.
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated in public so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches; a toy generate-and-test sketch follows the footnote below.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
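Here is the toy generate-and-test sketch promised above: it enumerates short compositions of a few invented primitives and returns the first program consistent with the given input/output examples, which is the essence of the approach (real systems prune the search space far more cleverly).

```python
# Toy illustration of generate-and-test inductive synthesis: enumerate short
# compositions of a few primitives and return the first program consistent
# with the input/output examples. Primitives and examples are invented.
from itertools import product

PRIMITIVES = {
    "double": lambda v: v * 2,
    "inc":    lambda v: v + 1,
    "square": lambda v: v * v,
    "negate": lambda v: -v,
}

def synthesize(examples, max_length=3):
    for length in range(1, max_length + 1):
        for names in product(PRIMITIVES, repeat=length):
            def program(v, names=names):
                for name in names:            # apply the primitives in order
                    v = PRIMITIVES[name](v)
                return v
            if all(program(i) == o for i, o in examples):
                return names                  # first program matching all examples
    return None

# Incomplete specification as input/output pairs for f(x) = (x + 1) * 2.
print(synthesize([(1, 4), (2, 6), (5, 12)]))  # ('inc', 'double')
```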
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis starts instead with a (presumed) complete (formal) specification (logic conditions). One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow - the world is not static).
When (and if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and on Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity than is introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.
