Have there been any agent-based simulation models built for investigating software engineering processes or problems? (e.g. collaboration in agile vs. traditional waterfall, QA defect trends, open source project growth, etc.)
In case you don't find anything else, there is some related academic work — for example,
this by Madey, Freeh and Tynan, which discusses OSS development as a social network, and this by the same authors, who then go on to model it using Swarm
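To give a flavour of what such a model looks like, here is a deliberately tiny sketch. It is not taken from the cited papers: the preferential-attachment rule, the parameter names, and the numbers are all illustrative assumptions, loosely inspired by the power-law project-size distributions that line of work reports. Each step, one developer arrives and either founds a new project or joins an existing one with probability proportional to its current size.

```python
import random

def simulate_oss_growth(steps, new_project_prob=0.05, seed=42):
    """Toy agent-based model of open source project growth.

    At each step one new developer arrives and either starts a new
    project or joins an existing one, chosen with probability
    proportional to its current size (preferential attachment).
    Returns the list of project membership counts.
    """
    rng = random.Random(seed)
    projects = [1]  # start with one single-developer project
    for _ in range(steps):
        if rng.random() < new_project_prob:
            projects.append(1)  # found a new project
        else:
            # weighted choice: bigger projects attract more developers
            r = rng.uniform(0, sum(projects))
            acc = 0.0
            for i, size in enumerate(projects):
                acc += size
                if r <= acc:
                    projects[i] += 1
                    break
    return projects

sizes = simulate_oss_growth(1000)
print(len(sizes), "projects; largest has", max(sizes), "developers")
```

Even this toy rule produces the characteristic skew: a few very large projects and a long tail of small ones, which is the kind of emergent behaviour agent-based models are used to study.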
In a DevOps context, who is responsible for automation tasks?
More precisely, in the case of "pipeline as code" in Jenkins, who is supposed to do this task: the developer or the operator?
Who is the actor?
"The key to DevOps is greater collaboration between engineering and operations."
Role: DevOps Engineer
Responsibilities:
1. Management: The DevOps Engineer ensures compliance with standards by monitoring the enterprise software and websites. The engineer also regulates tools and processes in the engineering department and catalyses their continuous enhancement and evolution.
2. Design and Development: Designing and developing the enterprise infrastructure and its architecture is one of the major responsibilities DevOps Engineers are tasked with. Such engineers are highly skilled coders, which enables them to script tools aimed at enhancing developer productivity.
3. Collaboration and Support: The DevOps modus operandi is to collaborate extensively and yield results in all aspects of the work, handling everything from technical analyses to deployment and monitoring, with a focus on enhancing overall system reliability and scalability.
4. Knowledge: DevOps staff and engineers help promote knowledge sharing and an overall DevOps culture throughout the engineering department.
5. Versatile Duties: DevOps staff and engineers also take on work delegated by the IT director, CTO, DevOps head, and others, and perform duties similar to those of the roles mentioned above.
Standard Definition:
DevOps is an IT mindset that encourages communication, collaboration, integration, and automation among software developers and IT operations in order to improve the speed and quality of delivering software.
Layman's Definition:
Any kind of automation that enables smoother development, operations, support, and delivery of the product is DevOps.
Industry's View:
There are usually two prominent areas where the DevOps mindset is applied across the industry:
a) Primary functions of DevOps, such as:
• Continuous Integration,
• Continuous Delivery,
• Continuous Deployment,
• Infrastructure as Code (infrastructure automation),
• CI/CD pipeline orchestration,
• Configuration management, and
• Cloud management (AWS, Azure, or GCP)
b) Secondary functions of DevOps, such as:
• SCM tool support,
• Code quality tool support (e.g. Sonar, Veracode, Nexus),
• Middleware support for tools like NPM, Kafka, Redis, NGINX, API gateways, etc.,
• Infrastructure support for components like F5, DNS, web servers, build server management, etc.,
• OS-level support for miscellaneous activities like server patching and scripting to automate server-level tasks.
There is no exact answer to this. It depends on many factors.
The development team will most likely want more ownership over the pipeline, and therefore would want to own the templates / code required to achieve the end goal of automation.
The opposite side of this is also completely valid. An operations team could be the custodians of a pipeline and mandate a development team must meet certain standards and use their automation pipelines to be able to get into an environment or onto a platform.
If an environment is an island that development teams are trying to reach, each team can build its own bridge to get there, or the operations team can build one bridge and ask the development teams to use it. Both are valid, and the end result is the same either way.
If the end result is the same, then the only thing that matters is how you apply it in the context of the organization, team(s) and the people you are working with to achieve that common goal.
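Whichever team owns it, the artifact under discussion is usually a Jenkinsfile versioned alongside the application code. A minimal declarative sketch, where the stage names, shell commands, and branch name are illustrative placeholders rather than a recommended standard:

```groovy
// Minimal declarative Jenkinsfile, checked into the application repository.
// Stage names, commands, and the 'main' branch filter are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './gradlew build' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
        stage('Deploy') {
            when { branch 'main' }   // deploy only from the main branch
            steps { sh './deploy.sh staging' }
        }
    }
}
```

One common middle ground between the two ownership models: a platform or operations team publishes reusable steps as a Jenkins shared library, so development teams own their Jenkinsfile while operations owns and standardizes the building blocks it calls.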
The assigned developer (and scrum team) should be responsible for the complete delivery of all aspects of development through final deployment into production. This fosters the notions of ownership and empowerment, and focuses responsibility for the full life cycle delivery of the service (application).
DevOps engineering should be responsible for providing an optimal tool chain and environments for rapid, high-quality delivery. I see the DevOps role as the development-focused counterpart to SRE: if SREs maintain high-performance, stable production environments, then the DevOps team maintains optimal development and testing environments. In theory, DevOps should extend into the realm of SRE, converging into a single team that supports the environments needed for rapid innovation with quality to meet the business needs.
Everything from committing the code to production. This includes:
• Automation
• Production support
• Writing automation scripts
• Debugging production infrastructure
In short: DevOps = Infrastructure + Automation + Support
I don't have any real experience in BDD, and I've recently discovered SpecFlow. I've read a bit about it (and Gherkin), gone through some screencasts, and I must say that I'm only moderately convinced. Of course, by nature, the examples provided as an introduction are relatively simple. Is anybody using SpecFlow on real (read "complex") projects and finding the tool helpful?
Gojko Adzic has written a whole book (www.specificationbyexample.com) for which he interviewed various teams around the globe that had been working according to these concepts for several years. The book not only describes their experience but also summarizes very well the common challenges and benefits teams reported. I think this book can help convince management as well as provide some guidance when starting out. It is not a step-by-step cookbook, though, nor does it talk in detail about specific tools (which is not necessary IMHO).
To offer first-hand experience: we (TechTalk) have been using SpecFlow for several years in projects of different sizes, domains, and architectures. We mainly do custom development in various domains (financial sector, government, GIS), and our projects usually run 2-9 months with a size of 150-500 person-days (PD). The largest projects we do with SpecFlow are 1800+ PD: long-running programs spanning several years with frequent ongoing releases.
We are also using SpecFlow in product development, e.g. in SpecLog (www.speclog.net).
We also coach larger projects in ATDD and Specification by Example in various industries (automotive, financial services, ...) that are applying these concepts quite successfully. Some of these projects are on other platforms; on Java we have used JBehave so far, although if I were starting a project right now I would strongly consider Cucumber-JVM.
I also recommend checking out the (free) screencasts at skillsmatter.com, which has been running related conferences (BDDX, CukeUp) for several years. These always include experience reports from various domains and industries.
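For readers who haven't seen one yet, a Gherkin specification is just structured plain text. The feature, steps, and amounts below are invented for illustration:

```gherkin
Feature: Account withdrawal

  Scenario: Withdrawing within the balance
    Given an account with a balance of 100 EUR
    When the customer withdraws 40 EUR
    Then the account balance should be 60 EUR
```

In SpecFlow, each Given/When/Then line is bound to a .NET method decorated with the corresponding [Given], [When], or [Then] attribute, so the business-readable text and the executable test stay in sync.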
I wanted to know whether there are any open-source tools for load testing web applications.
Is LoadRunner a good tool for this purpose from an enterprise perspective?
Could you clarify your question a bit? Are you looking to take the queries generated by the web application and then to reproduce them with a performance testing tool directly against the database or are you looking to exercise the web app and then analyze the database?
As far as what is best, this is a very subjective item, and it comes back to that most dangerous of concepts: "requirements." The requirements of one organization may point the way to one tool over another depending upon the technical needs of the application, the available skills within the existing or planned performance testing team, and the budget. Mercury certainly made the ROI case for LoadRunner at the enterprise level long before it became part of HP's software offerings, with the market responding by giving it the largest overall market share. However, as evidenced by its non-monopoly position, the requirements of other organizations have led to the adoption of different tools.
Build your requirements (technical, skills, and business); then evaluate the various market offerings to see which one works for you. The more interfaces you add, the more compelling a commercial tool becomes over an open-source one. The greater the skills depth of your performance team, the more flexibility you have in using an open-source tool, as you will need to build out some of the analytical pieces that a commercial tool includes by default. ...
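As a very rough illustration of what any load testing tool, open source or commercial, is doing under the hood, here is a self-contained Python sketch using only the standard library. The throwaway local endpoint, request count, and percentile arithmetic are all illustrative; real tools (open-source options include JMeter, Gatling, and k6) add ramp-up profiles, scripting, and reporting on top of this core loop:

```python
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class _Handler(BaseHTTPRequestHandler):
    """Throwaway local endpoint so the example is self-contained."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request console logging

def run_load_test(url, requests=50, concurrency=10):
    """Fire `requests` GETs with `concurrency` workers; return latency stats."""
    latencies = []
    lock = threading.Lock()
    def hit():
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        with lock:
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(requests):
            pool.submit(hit)
    latencies.sort()
    return {
        "count": len(latencies),
        "median_s": statistics.median(latencies),
        # crude 95th percentile: the value below which ~95% of samples fall
        "p95_s": latencies[max(0, int(0.95 * len(latencies)) - 1)],
    }

# Spin up the throwaway server on an ephemeral port, load test it, shut down.
server = ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
stats = run_load_test(f"http://127.0.0.1:{server.server_port}/", requests=50)
server.shutdown()
print(stats["count"], "requests completed")
```

The analytical pieces the answer mentions (percentiles, throughput over time, correlation with server metrics) are exactly what you would have to build yourself around a loop like this when choosing a bare-bones open-source option.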
I have some confusion about the terms SDLC and Software Process. With respect to these (and more or less similar) terms, I have a few questions.
What is the difference between SDLC and Software Process? (I understand SDLC is not just Waterfall.)
Can we map SDLC to the Unified Process?
About the activities: the traditional waterfall model has an explicit Analysis phase. Do we do analysis in a Unified Process (any unified process, Agile or Rational)?
SDLC stands for System Development Life Cycle, and it is a more or less generic term for whatever standard life cycle you have implemented.
SDLC is essentially your software process, but in my experience most people associate it more directly with waterfall processes, as you indicated, and more specifically with CMMI standards.
Typically with the SDLC, you will find that different groups have different methodologies to express it.
Since I don't recall the exact definition, there may be more linking it to the waterfall methods than just semantics. For instance, I believe Agile methodologies could be considered a type of SDLC, but I could be wrong about that.
I hope this helps.
SDLC is short for Software Development Life Cycle: the process a software product goes through from requirements to maintenance.
The SDLC contains various methodologies, such as Waterfall, Scrum, and Agile. Each follows the same process steps (requirements, design, implementation, testing, maintenance) but differs in how those steps are applied.
Some methodologies, such as Agile, run several activities at the same time, for example implementation alongside design and documentation.
In the Waterfall methodology, by contrast, you cannot move to the next phase until the previous one is finished, and you cannot execute two phases at the same time: for example, the design phase must be complete before implementation begins.
Software Process - a set of activities and associated results that produce a software product. There are 4 fundamental process activities that are common to all software processes:
Software Specification
Software Development
Software Validation
Software Evolution
SDLC - the oldest and most widely used approach in software engineering. It follows a number of sequential phases and a partitioned set of activities, based on an engineering/construction/production view:
Problem exploration
Feasibility Study
Requirement Gathering
Analysis
Design
Construction
IS implementation
Operation and Maintenance
Evolution
Phase out
I very much agree with you; SDLC dates back to the 1950s, and it was the first framework introduced at the time. However, I have a few notes on the SDLC phases. I'd say that there are 7 stages of the SDLC:
1. Planning
2. Requirements Analysis
3. Design
4. Development
5. Testing
6. Deployment
7. Maintenance and improvement.
Today there are a lot of SDLC models, Waterfall being the most popular one, though Agile has been gaining ground lately. Yet I find a lot of teams highly disappointed with Agile. "We are constantly changing things, so we never get anything done" is the most common phrase I hear.
What is the difference between SDLC and Software Process? (I understand SDLC is not just Waterfall.)
Ans: SDLC is the development life cycle used in each and every project; it defines all the standard phases, which is very useful in software development. A software process defines all the activities/phases used to improve the quality of the product.
The software process also covers the testing life cycle, since it includes all the phases, even the basic ones.
Can we map SDLC with Unified Process?
Ans: Yes, you can map them, but only the methodologies, not the life cycle.
Let's address these queries one by one.
The difference between SDLC and Software Process:
Software Process (or Software Development Process) and Software Development Life Cycle are both concepts with the same goal: developing software.
There are multiple strategies or models available for developing software, such as Waterfall and Agile.
The SDLC provides a set of phases for developers to follow, where each phase builds on the result of the previous one.
The Unified Software Development Process or Unified Process is an iterative and incremental software development process framework.
For more details:
software process: https://www.geeksforgeeks.org/software-processes-in-software-engineering/
Software development life cycle: https://www.tatvasoft.com/outsourcing/2022/09/sdlc-best-practices.html
Yes, we can map SDLC with unified process.
You can go through this link for more details: https://www2.cdc.gov/cdcup/library/framework/mapping.htm
The Unified Process, like most agile techniques, does not expect the general project plan to define when each use case will be implemented. Object-oriented analysis is therefore required for the design of the information system.
For more details, use this reference: https://www.sciencedirect.com/topics/computer-science/unified-process
Has anybody been on, or seen, a kind of "Surgical Team" as described in The Mythical Man-Month? Have you heard of somebody actually implementing "Mills's Proposal"?
There is a lot of detail about the various roles in the book itself, but for those who haven't read the book, I found a website and a blog post which give a good summary. I've quoted the roles from the website below:
The Surgical Team
The surgeon is the chief programmer and the el-presidente of the whole team. He produces all the specifications, codes the entire system the team is responsible for, tests it, and drafts its supporting documentation.
The copilot is the surgeon's assistant. His main purpose is to share in the thinking about design issues, to serve as a sounding board, as it were. The copilot represents the team in meetings with other teams. He knows the code intimately, and serves as insurance in case of disaster to the surgeon.
The toolsmith supports the surgeon and builds specialized utilities and tools as may be required by his surgeon. Each team has its dedicated toolsmith in addition to any central services provided by the rest of the project infrastructure.
The tester is responsible for maintaining test cases for testing the surgeon's work as he writes it. He is both an adversary who devises test cases to measure against the formal specs and an assistant who devises test data to be used in debugging.
The language lawyer, who can serve several surgeons, is a widely consulted specialist who delights in mastery of the intricacies of the programming languages and the operating systems upon which the software must perform.
The administrator handles money, people, space, and machines. The surgeon is the ultimate boss, with the last word on all these issues, but the day-to-day management of these issues, and interfacing with the administrative machinery of the project, is the role of a professional administrator. One administrator may serve more than one team.
The editor edits and revises the documentation as drafted or dictated by the surgeon and oversees the mechanics of its production.
The program clerk, trained as a secretary, is responsible for maintaining all the machine-readable and human-readable technical records generated by the team. All the filing and indexing is the responsibility of the program clerk.
The secretaries handle the project correspondence and non-project files.
We did use Brooks's surgical team approach at a startup we set up about 10 years ago. There were five of us at the company, plus a few others at the university lab supporting us. The experience was technically great, but it didn't last long for business reasons. :-)