Engineering is the practical application of science and math to solve problems, and it is everywhere in the world around us.
Engineers are problem-solvers who want to make things work more efficiently, more quickly, and less expensively.
Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
Origin of Software Engineering
Software Engineering emerged because of problems encountered in the early days of software development. Some of these problems were:
- Cost and budget overruns: many software projects ran over budget and schedule.
- Property damage: software defects can cause property damage, and poor software security allows hackers to steal identities, costing time, money, and reputations.
- Life and death: software defects can kill.
These and many other reasons led to the need for managing software development, and thus to the origin of Software Engineering.
Goals of Software Engineering
- Create software cheaper
- Create software faster
- Create software that is easy to modify for unanticipated requirements
- Create software without bugs
- Create software that uses fewer resources to get the job done
- Create software to satisfy customer requirements
- Create software that is easy for developers to understand
- Create software that is easy for users to use
- Create software that opens new doors (e.g. new business models made possible by browser technologies)
Differences between SE and Traditional Engineering Disciplines
| Software Engineering | Traditional Engineering Disciplines |
| --- | --- |
| Software engineering is based on computer science, information science, and discrete mathematics. | Traditional engineering is based on mathematics, science, and empirical knowledge. |
| Software engineers construct non-real (abstract) artefacts or objects. | Traditional engineers construct real artefacts or objects. |
| Product reliability is measured by the number of errors per thousand lines of source code. | Product reliability is measured by time to failure. |
| Replication of software products is easy (copying CDs or downloading files). | Replication of products requires significant development effort. |
| Software engineers often apply new and untested elements in software projects. | Traditional engineers generally try to apply known and tested principles, and limit the use of untested innovations to only those necessary to create a product that meets its requirements. |
| Software engineering is about 50 years old. | Traditional engineering is thousands of years old. |
Software Development Process
A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. There are several models for such processes, each describing approaches to the variety of tasks or activities that take place during the process.
It defines all the tasks required for developing and maintaining software.
( Software process is a set of activities that leads to the production of a software product. )
Phases in Software Development
1.) Project Initiation
2.) Feasibility Study
3.) Requirement Analysis
4.) Design
5.) Coding and Testing
6.) Implementation
7.) Maintenance
For large projects, a PLANNING phase is introduced after the feasibility study.
1.) Project Initiation:: This phase involves receiving the order for the software project from the client, who gives the requirements for the project. This phase identifies the project's primary objectives, assumptions, constraints (or limitations), outputs, and criteria for accepting the project.
2.) Feasibility Study:: This is the study or analysis of the client's requirements to determine whether the project is practically fit to be developed, i.e., whether it can be done with the available resources. A feasibility study makes sure the program created is actually needed and will be useful to the intended users. Here, the following are studied:
- Are there any alternatives?
- Why should a developer build the software at all?
There are three types of feasibility study:
Economic: It checks whether the returns will justify the investment in the project. The economic feasibility study weighs the cost of developing the software against the ultimate income or benefits obtained from the developed system. There must be scope for profit after the successful completion of the project.
Technical: It checks whether the technology needed to implement the proposed system is available. The technical feasibility study compares the level of technology available in the software development firm with the level of technology required for the development of the product, where the level of technology consists of the programming language, the hardware resources, other software tools, etc.
Operational: It checks whether the software will be operationally feasible under the applicable rules, regulations, laws, organizational culture, union agreements, etc. The proposed software must have high operational feasibility, and its usability should be high.
3.) Requirement Analysis:: Requirements analysis involves getting the complete requirements for the software project.
The objective of this phase is to determine what the system must do to solve the problem (without describing how). This is done by an analyst (also called a requirements analyst).
Requirements analysis includes three types of activities:
* Requirements gathering: the task of communicating with customers and users to determine what their requirements are. This is sometimes also called eliciting requirements.
* Analyzing requirements: determining whether the stated requirements are unclear, incomplete, ambiguous, or contradictory, and then resolving these issues.
* Recording requirements: Requirements might be documented in various forms, such as natural-language documents, use cases, user stories, process specifications, or a Software Requirements Specification (SRS).
Analysts can use several techniques to elicit (or get) the requirements from the customer. These may include:
- holding interviews and creating requirements lists,
- studying existing documents and the current mode of operations, and
- getting details from users through questionnaires.
By making a detailed analysis of the software project, a detailed document or report is prepared in this phase. This document contains details such as the project plan or schedule, the cost estimated for developing and executing the system, target dates for each phase of delivery of the system developed, and so on. The requirements analysis phase is the base of the software development process, since the further steps taken in the software development life cycle are based on the analysis made in this phase, so the analysis has to be made carefully.
4.) Design:: In this phase the software's architecture, modules, interfaces, and data structures are worked out from the requirements gathered earlier, and design documents are prepared to guide the coding phase.
5.) Coding and Testing:: In the coding phase the actual development of the system takes place: based on the design documents prepared in the earlier phase, code is written in the chosen programming language. In the testing phase, the developed software is tested and reports are prepared about bugs or errors in it. There are different levels and methods of testing, like unit testing, system testing, and so on; based on the need, the testing methods are chosen and bug reports prepared. To ease the testing process, debuggers and testing tools are available.
6.) Implementation:: In this phase the tested system is installed in the client's environment and put into operation; user training and migration of existing data typically also happen here.
7.) Maintenance:: After delivery, the software is corrected, adapted, and enhanced over time to fix defects discovered in use, cope with changes in the environment, and meet new requirements.
WATERFALL MODEL
Disadvantages of the Waterfall Model:
1. Rigid design and inflexible procedure
2. Difficult to respond to changing customer requirements
3. High risk associated with committing so much resources upfront
4. Difficult to keep stakeholders (or clients) interested and committed to the project till the final product release
5. Not suitable for large and complex software projects
-----------------------------------------------------------------------------------------
Project Planning
Project planning establishes a plan for the software engineering work that follows. Planning means establishing project goals and then the activities and tasks (selecting a course of action) that will lead to their accomplishment. "Planning thus involves specifying the goals and objectives for a project and the strategies, policies, plans and procedures for achieving them."
It describes:
+ the technical tasks to be conducted,
+ the risks that are likely,
+ the resources that will be required,
+ the work products to be produced, and
+ a work schedule.
The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion.
Why is it important?
Careful planning right from the beginning of the project can help to avoid costly mistakes. It provides an assurance that the project execution will accomplish its goals on schedule and within budget.
Basic Elements of a Project Plan
Planning Activities:
The planning activities that you, with the help of your team members, will need to do for the project are listed below:
- To recruit and build the team
- To organize the project
- To identify and confirm the start and end dates through a project schedule
- To create the project budget
- To identify clearly the customer requirements for the final outcome
- To define the project scope boundaries: what is included and not included in the project
- To write a description of the final outcome
- To decide who will do what
- To assign accountability
Team Structure:
Team structure addresses the issue of how the individual project teams are organized. There are several possible ways in which a project team can be organized. There are mainly three formal team structures:
+ chief programmer,
+ democratic, and
+ mixed team organizations
Chief Programmer Team:
In the chief programmer team (CPT) the complete authority and responsibility for the system rests with one individual--the chief programmer. In this team organization, a senior engineer provides the technical leadership and is designated as the chief programmer. The chief programmer partitions the task into small activities and assigns them to the team members.
Democratic Team Organization:
Democratic teams are characterized by everyone being a peer. Each programmer is an equal member of the team; the decisions a programmer makes and the opinions a programmer holds are considered of weight equal to everyone else's.
An advantage to a democratic team is ease of replacement. Any engineer can be replaced by nearly anyone.
Hierarchical Team Organization:
Hierarchical teams are organized like a tree. The hierarchical structure is also referred to as controlled decentralized. Strategic decision making is centralized at the top of the hierarchy. The project leader and group leaders are responsible for strategic decision making, setting goals, and partitioning the work among subgroups. These subgroups function as small egoless programming teams where authority and communication are decentralized.
Hierarchical Team Structure
------------------------------------------------------------------------------------------------
Software Requirements Specifications
The SRS document is an agreement between the developer and the customer covering the functional and non-functional requirements of the software to be developed.
Parts of SRS Document:
The important parts of SRS document are:
- Functional requirements of the system,
- Non-functional requirements of the system, and
- Goals of implementation
=> Functional requirements:-
The functional requirements part discusses the functionalities required from the system. The system is considered to perform a set of functions. Each function of the system can be considered as a transformation of a set of input data to the corresponding set of output data.
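For instance, a requirement like "compute gross pay from hours worked and an hourly rate" describes one such transformation of input data to output data. A minimal Python sketch (the requirement and all names here are hypothetical, for illustration only):

```python
# A functional requirement viewed as a transformation of input data to
# output data. The requirement itself is a hypothetical example:
# "Given hours worked and an hourly rate, the system shall compute gross pay."

def compute_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    """Transforms the input set (hours, rate) into the output (gross pay)."""
    return hours_worked * hourly_rate

print(compute_gross_pay(40.0, 15.0))  # 600.0
```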
=> Nonfunctional requirements:-
Nonfunctional requirements deal with the characteristics of the system which cannot be expressed as functions, such as the maintainability, portability, and usability of the system.
Nonfunctional requirements may include:
# reliability issues,
# accuracy of results,
# human - computer interface issues,
# constraints on the system implementation, etc
=> Goals of implementation:-
The goals of implementation part documents some general suggestions regarding development. These suggestions guide trade-offs among design goals. This section might document issues such as revisions to the system functionalities that may be required in the future, new devices to be supported in the future, reusability issues, etc. These are items that the developers might keep in mind during development so that the developed system can meet aspects that are not required immediately.
Properties of a good SRS document
The important properties of a good SRS document are the following:
=> Concise. The SRS document should be concise and at the same time unambiguous, consistent, and complete. Verbose and irrelevant descriptions reduce readability and also increase error possibilities.
=> Structured. It should be well-structured. A well-structured document is easy to understand and modify. In practice, the SRS document undergoes several revisions to cope with evolving customer requirements. Therefore, in order to make modifications to the SRS document easy, it is important that the document be well-structured.
=> Black-box view. It should only specify what the system should do and refrain from stating how to do it. The SRS document should specify the external behavior of the system and not discuss implementation issues; it should view the system to be developed as a black box and specify its externally visible behavior. For this reason, the SRS document is also called the black-box specification of a system.
=> Conceptual integrity. It should show conceptual integrity so that the reader can easily understand it.
=> Response to undesired events. It should characterize acceptable responses to undesired events. These are called system responses to exceptional conditions.
=> Verifiable. All requirements of the system as documented in the SRS document should be verifiable. This means that it should be possible to determine whether or not requirements have been met in an implementation.
Data Flow Diagrams (DFDs)
Data Flow Diagrams (DFDs), also called data flow graphs, are commonly used during problem analysis. A DFD shows the flow of data through a system.
A DFD is the pictorial representation of the flow of data into, around, and out of the software.
A DFD aims to capture the transformations that take place within a system on the input data so that eventually the output data is produced. The agent that performs a transformation of data from one state to another is called a process (or a bubble). A DFD thus shows the movement of data through the different transformations or processes in the system.
It focuses on the process that transforms input to output.
It is also known as bubble chart.
The direction of flow may be from top to bottom or from left to right.
DFDs are basically of two types: physical and logical.
Physical DFDs are used in the analysis phase to study the functioning of the current system.
Logical DFDs are used in the design phase for depicting the flow of data in the proposed system.
Elements of DFDs
Data Flow Diagrams are composed of the four basic symbols shown below.
- Squares represent external entities, which are sources or destinations of data.
- Rounded rectangles represent processes, which are activities that transform or manipulate the data (combine, reorder, convert, etc.).
- Arrows represent data flows, i.e., the movement of data.
- Open-ended rectangles represent data stores, which hold data that is not moving (delayed data at rest), including electronic stores such as databases or XML files and physical stores such as filing cabinets or stacks of paper.
External Entities
External entities determine the system boundary. They are external to the system being studied. They are often beyond the area of influence of the developer.
These can represent another system or subsystem. They go on the margins/edges of the data flow diagram and are given appropriate names.
Processes
Processes are work or actions performed on incoming data flows to produce outgoing data flows. These show data transformation or change. Data coming into a process must be "worked on" or transformed in some way. Thus, all processes must have inputs and outputs. In some (rare) cases, data inputs or outputs will only be shown at more detailed levels of the diagrams. Each process is always "running" and ready to accept data.
Data Flow
Data flow represents the input (or output) of data to (or from) a process ("data in motion"). A data flow carries only data, not control, and should represent the minimum essential data the process needs; using only the minimum essential data reduces the dependence between processes. Data flows must begin and/or end at a process.
Data flows are always named. Names should be unique, should be an identifying noun, and should not include the word "data". For example: order, payment, complaint.
Data Stores
Data stores are repositories for data that is temporarily or permanently recorded within the system. A data store is an "inventory" of data. Data stores are a common link between data and process models. Only processes may connect with data stores.
There can be two or more systems that share a data store. This can occur in the case of one system updating the data store, while the other system only accesses the data.
Example:
DFD of a system that pays workers
In this DFD, there is one basic input data flow, the weekly time sheet, which originates from the source worker. The basic output is the pay check, the sink for which is also the worker. In this system, first the employee's record is retrieved, using the employee ID, which is contained in the time sheet. From the employee record, the rate of payment and overtime are obtained.
These rates and the regular and overtime hours (from the time sheet) are used to compute the payment. After the total payment is determined, taxes are deducted. To compute the tax deduction, information from the tax-rate file is used. The amount of tax deducted is recorded in the employee and company records. Finally, the paycheck is issued for the net pay. The amount paid is also recorded in company records.
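The processing this DFD describes can also be sketched in code. The following Python sketch is illustrative only; the record layout, pay rates, and flat tax rate are assumptions, not part of the system described above.

```python
# Illustrative sketch of the pay-worker DFD's processing steps.
# All record layouts and rates here are hypothetical assumptions.

employee_records = {"E123": {"rate": 20.0, "overtime_rate": 30.0}}
TAX_RATE = 0.15  # assumed flat tax rate from the "tax rate file"

def process_timesheet(timesheet: dict) -> float:
    # 1. Retrieve the employee record using the employee ID from the time sheet.
    record = employee_records[timesheet["employee_id"]]
    # 2. Compute the total payment from regular and overtime hours and rates.
    total = (timesheet["regular_hours"] * record["rate"]
             + timesheet["overtime_hours"] * record["overtime_rate"])
    # 3. Deduct taxes, then issue the paycheck for the net pay.
    tax = total * TAX_RATE
    return total - tax  # net pay on the check

print(process_timesheet({"employee_id": "E123",
                         "regular_hours": 40, "overtime_hours": 5}))  # 807.5
```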
Conventions used when drawing DFDs:
There are several common modeling rules to be followed:
- All processes must have at least one data flow in and one data flow out.
- All processes should modify the incoming data, producing new forms of outgoing data.
- Each data store must be involved with at least one data flow.
- Each external entity must be involved with at least one data flow.
- A data flow must be attached to at least one process.
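These rules can be checked mechanically once a DFD is written down as data. Below is a minimal Python sketch; the way the DFD is represented here (sets of names plus a list of source/destination pairs) is an assumption made for illustration, not a standard notation.

```python
# Minimal DFD consistency checker (illustrative; the data layout is assumed).
# A flow is a (source, destination) pair between named nodes.

processes = {"Compute Pay"}
entities = {"Worker"}
stores = {"Employee Records"}
flows = [("Worker", "Compute Pay"),           # weekly time sheet
         ("Employee Records", "Compute Pay"),
         ("Compute Pay", "Worker")]           # pay check

def check(dfd_processes, dfd_entities, dfd_stores, dfd_flows):
    errors = []
    endpoints = [node for flow in dfd_flows for node in flow]
    for p in dfd_processes:  # every process needs a flow in and a flow out
        if not any(dst == p for _, dst in dfd_flows):
            errors.append(f"process '{p}' has no input flow")
        if not any(src == p for src, _ in dfd_flows):
            errors.append(f"process '{p}' has no output flow")
    for node in dfd_entities | dfd_stores:  # stores/entities need >= 1 flow
        if node not in endpoints:
            errors.append(f"'{node}' is not involved in any data flow")
    for src, dst in dfd_flows:  # each flow must touch at least one process
        if src not in dfd_processes and dst not in dfd_processes:
            errors.append(f"flow {src} -> {dst} is not attached to a process")
    return errors

print(check(processes, entities, stores, flows))  # [] means the rules hold
```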
Problems with data flow diagrams include:
- choosing bubbles appropriately,
- partitioning bubbles in a meaningful and mutually agreed-upon manner,
- the size of the documentation needed to understand the data flows,
- the diagrams remaining strongly functional in nature and thus subject to frequent change,
- the fact that although "data" flow is emphasized, "data" modeling is not, so there is little understanding of just what the subject matter of the system is about, and
- the difficulty, not only for the customer in following how the concept is mapped into data flows and bubbles, but also for the designers, who must shift the DFD organization into an implementable format.
Data Dictionary:
A data dictionary is a structured repository of data about data. It is a set of rigorous definitions of all DFD data elements and data structures.
A data dictionary is a set of meta-data which contains the definition and representation of data elements. It gives a single point of reference for the data repository of an organization. A data dictionary lists all data elements but does not say anything about the relationships between data elements.
To define the data structure, different notations are used. These are similar to the notations for regular expressions. Essentially, besides sequence or composition (represented by +), selection and iteration are included. Selection (represented by the vertical bar "|") means one or the other, and repetition (represented by "*") means one or more occurrences.
The data dictionary for the above DFD is shown below:
Weekly_timesheet = Employee_Name + Employee_ID + {Regular_hours + Overtime_hours}
Pay_rate = {Hourly | Daily | Weekly} + Dollar_amount
Employee_Name = Last + First + Middle_Initial
Employee_ID = digit + digit + digit + digit
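Such dictionary entries map naturally onto types in a programming language: sequence (+) becomes fields, selection (|) becomes a restricted choice, and iteration becomes a list. A Python sketch of the entries above (this mapping is an illustration, not part of data dictionary notation):

```python
# The data dictionary above mirrored as Python dataclasses (illustrative).
# Sequence (+) -> fields; selection (|) -> Literal; iteration -> list.
from dataclasses import dataclass
from typing import Literal, List

@dataclass
class EmployeeName:       # Employee_Name = Last + First + Middle_Initial
    last: str
    first: str
    middle_initial: str

@dataclass
class PayRate:            # Pay_rate = {Hourly | Daily | Weekly} + Dollar_amount
    period: Literal["Hourly", "Daily", "Weekly"]
    dollar_amount: float

@dataclass
class HoursEntry:         # Regular_hours + Overtime_hours (repeated group)
    regular_hours: float
    overtime_hours: float

@dataclass
class WeeklyTimesheet:    # Weekly_timesheet = Employee_Name + Employee_ID + {...}
    employee_name: EmployeeName
    employee_id: str      # four digits, e.g. "1234"
    hours: List[HoursEntry]
```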
--------------------------------------------------------------------------------------
Software Metrics
Software Metrics: A software metric is a quantitative measure of the degree to which software possesses a given attribute.
Software metrics are numerical data related to software development. Metrics strongly support software project management activities.
Software metrics are an integral part of the state-of-the-practice in software engineering.
Software Metrics is the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products.
Software metrics must provide the information needed by engineers for technical decisions.
Its goal is to provide objective, reproducible and quantifiable measurements.
Software metrics can perform one of four functions.
- Metrics can help us Understand more about our software products, processes and services.
- Metrics can be used to Evaluate our software products, processes and services against established standards and goals.
- Metrics can provide the information we need to Control resources and processes used to produce our software.
- Metrics can be used to Predict attributes of software entities in the future.
A metric quantifies a characteristic of a process or product. Metrics can be directly observable quantities or can be derived from one or more directly observable quantities.
Lines of Code (LOC): Also known as Source Lines of Code (SLOC).
Source lines of code (SLOC) is a software metric used to measure the size of a software program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.
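As a rough illustration, a minimal counter can skip blank lines and full-line comments; as discussed under "Lack of Counting Standards" below, what exactly counts as a line of code is itself unsettled. A Python sketch (the comment convention assumed here is Python's):

```python
# Minimal physical-SLOC counter (illustrative sketch). It counts non-blank
# lines that are not full-line comments; what "counts" as a line of code
# is debatable (see "Lack of Counting Standards" below).

def count_sloc(path: str, comment_prefix: str = "#") -> int:
    sloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                sloc += 1
    return sloc

# Usage (hypothetical file): print(count_sloc("payroll.py"))
```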
Advantages
-------------------------------------------------------------------------------------
- Scope for Automation of Counting: Since a line of code is a physical entity, manual counting can easily be eliminated by automating the counting process. Small utilities may be developed for counting the LOC in a program. However, a code-counting utility developed for a specific language cannot be used for other languages, due to the syntactical and structural differences among languages.
- An Intuitive Metric: Line of Code serves as an intuitive metric for measuring the size of software because it can be seen and the effect of it can be visualized. Function points are said to be more of an objective metric which cannot be imagined as being a physical entity, it exists only in the logical space. This way, LOC comes in handy to express the size of software among programmers with low levels of experience.
Disadvantages
- Lack of Accountability: The lines-of-code measure suffers from some fundamental problems. Some think it isn't useful to measure the productivity of a project using only results from the coding phase, which usually accounts for only 30% to 35% of the overall effort.
- Lack of Cohesion with Functionality: Though experiments have repeatedly confirmed that effort is highly correlated with LOC, functionality is less well correlated with LOC. That is, skilled developers may be able to develop the same functionality with far less code, so one program with less LOC may exhibit more functionality than another similar program. In particular, LOC is a poor productivity measure of individuals, because a developer who develops only a few lines may still be more productive than a developer creating more lines of code.
- Adverse Impact on Estimation: Because of the problem presented under the first point, estimates based on lines of code can, in all possibility, go badly wrong.
- Developer's Experience: The implementation of a specific logic differs based on the level of experience of the developer. Hence, the number of lines of code differs from person to person. An experienced developer may implement certain functionality in fewer lines of code than a developer of relatively less experience, though they use the same language.
- Difference in Languages: Consider two applications that provide the same functionality (screens, reports, databases), one written in C++ and the other in a language like COBOL. The number of function points would be exactly the same, but aspects of the application would be different. The lines of code needed to develop the application would certainly not be the same, and as a consequence the amount of effort required (hours per function point) would differ. Unlike lines of code, the number of function points remains constant.
- Advent of GUI Tools: With the advent of GUI-based programming languages and tools such as Visual Basic, programmers can write relatively little code and achieve high levels of functionality. For example, instead of writing a program to create a window and draw a button, a user with a GUI tool can use drag-and-drop and other mouse operations to place components on a workspace. Code that is automatically generated by a GUI tool is not usually taken into consideration when using LOC methods of measurement. This results in variation between languages; the same task that can be done in a single line of code (or no code at all) in one language may require several lines of code in another.
- Problems with Multiple Languages: In today's software scenario, software is often developed in more than one language. Very often, a number of languages are employed depending on the complexity and requirements. Tracking and reporting of productivity and defect rates poses a serious problem in this case, since defects cannot be attributed to a particular language subsequent to integration of the system. Function points stand out as the best measure of size in this case.
- Lack of Counting Standards: There is no standard definition of what a line of code is. Do comments count? Are data declarations included? What happens if a statement extends over several lines? – These are the questions that often arise. Though organizations like SEI and IEEE have published some guidelines in an attempt to standardize counting, it is difficult to put these into practice especially in the face of newer and newer languages being introduced every year.
- Psychology: A programmer whose productivity is being measured in lines of code will have an incentive to write unnecessarily verbose code. The more management is focusing on lines of code, the more incentive the programmer has to expand his code with unneeded complexity. This is undesirable since increased complexity can lead to increased cost of maintenance and increased effort required for bug fixing.
-------------------------------------------------------------------------------------
Software Design
Characteristics of a good software design:
- Correctness: A good design should correctly implement all the functionalities identified in the SRS document.
- Understandability: A good design is easily understandable.
- Efficiency: It should be efficient.
- Maintainability: It should be easily amenable to change
Design Objectives
Design Principles
- Design should be traceable to the analysis model
- Always consider the architecture of the system to be built
- Design of data is as important as design of processing functions
- Interfaces ( both internal and external ) must be designed
- User interface design should be tuned to the needs of the end-user
- Component-level design should be functionally independent
- Components should be loosely coupled to one another and to the external environment
- Design representations (models) should be easily understandable
- The design should be developed iteratively. With each iteration, the designer should strive for greater simplicity
Design Concepts
Modularity
Software can be divided into separately addressable elements, called modules, which are integrated to solve the problem.
Abstraction
Abstraction is a technique in which unwanted details are omitted and only the needed information is given. At the highest level of abstraction the solution is stated in general terms; at the lowest level of abstraction the solution is given in detail.
Abstraction manages complexity by focusing on essential characteristics and hiding implementation details. It allows postponement of various design decisions that occur at various levels of analysis, for example algorithmic considerations, architectural and structural considerations, and platform considerations.
Types of abstraction:
- Procedural Abstraction
- Data Abstraction
- Control Abstraction
Information Hiding:
Information hiding is an important means of achieving abstraction, i.e., design decisions that are subject to change should be hidden behind abstract interfaces. Application software should communicate only through well-defined interfaces.
Types of information to be hidden:
- Data representations
- Algorithms (e.g., sorting or searching techniques)
- Input and output formats (e.g., machine dependencies)
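For example, a module can hide its data representation behind a small public interface, so the representation can change without affecting callers. A minimal Python sketch (the class and its names are hypothetical):

```python
# Information hiding sketch (hypothetical example): callers use only the
# public interface; the data representation (_items) can change freely.

class TaskQueue:
    def __init__(self):
        self._items = []   # hidden representation: could become a heap or a
                           # database table without affecting any caller

    def add(self, task: str) -> None:   # well-defined public interface
        self._items.append(task)

    def next_task(self) -> str:
        return self._items.pop(0)       # the queueing algorithm is hidden too

q = TaskQueue()
q.add("write SRS")
q.add("review design")
print(q.next_task())  # "write SRS"
```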
Functional Independence:
Functional independence is measured using two criteria: cohesion and coupling.
- Cohesion
- Coupling
Cohesion:
Cohesion is the relationship among the elements within a module; it is a measure of the relative functional strength of a module. The following types of cohesion exist:
- Logical Cohesion
- Temporal Cohesion
- Procedural Cohesion
- Communicational Cohesion
Temporal cohesion exists when a module contains tasks that are related in such a way that all the tasks must be executed within the same span of time.
Procedural cohesion exists when the processing elements of a module are related and must be executed in a specific order.
Communicational cohesion exists when all the processing elements concentrate on one area of a data structure.
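The contrast can be sketched in code. In the hypothetical Python snippet below, the first module groups unrelated tasks (logical cohesion, weak), while the second concentrates on one data structure, an employee record (communicational cohesion, stronger):

```python
# Hypothetical sketch contrasting cohesion levels.

# Logical cohesion (weak): unrelated tasks grouped only because they are
# all vaguely "utilities", selected by a flag.
def utility(kind, value):
    if kind == "tax":
        return value * 0.15   # a tax calculation
    if kind == "truncate":
        return value[:10]     # string handling, unrelated to tax

# Communicational cohesion (stronger): every function concentrates on one
# area of one data structure, the employee record.
def gross_pay(employee: dict) -> float:
    return employee["pay_rate"] * employee["hours"]

def overtime_pay(employee: dict) -> float:
    return employee["overtime_rate"] * employee["overtime_hours"]

employee = {"pay_rate": 20.0, "hours": 40,
            "overtime_rate": 30.0, "overtime_hours": 5}
print(gross_pay(employee) + overtime_pay(employee))  # 950.0
```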
Coupling:
It is the degree of interdependence between two modules of a system.
Coupling is a measure of the relative independence among modules, that is, a measure of the interconnection among modules. The following types of coupling exist:
- Data coupling
- Stamp Coupling
- Control Coupling
- External Coupling
- Common Coupling
- Content Coupling
Stamp coupling exists when one module accesses another module through a data structure.
Control coupling exists when the control is passed between modules using a control variable.
External coupling exists when modules are connected to an environment external to the software.
Common coupling exists when the modules share the same global variable.
Content coupling exists when one module makes use of data or control information maintained within another module. Content coupling also occurs when branches are made into the middle of a module.
Effective modular design must have High Cohesion and Low Coupling.
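The following hypothetical Python sketch contrasts common coupling (modules communicating through a shared global variable) with data coupling (modules exchanging only the data they need):

```python
# Hypothetical contrast of coupling levels.

# Common coupling (undesirable): modules share a global variable.
current_total = 0.0

def add_item_global(price: float) -> None:
    global current_total
    current_total += price   # every module touching current_total is coupled
                             # to every other module that touches it

# Data coupling (desirable): modules exchange only the data they need.
def add_item(total: float, price: float) -> float:
    return total + price     # no hidden shared state

total = add_item(0.0, 9.99)
total = add_item(total, 5.00)
print(total)  # 14.99
```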
--------------------------------------------------------------------
Structured Analysis:
It is a set of techniques and tools used by the system analyst to develop a new kind of system specification that is easily understandable to the user.
-----------------------------------------------------------------------------------
TESTING
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software testing is the process of executing a program or system with the intent of finding errors.
Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Testing is usually performed for the following purposes:
- To improve quality.
- For Verification & Validation (V&V)
- For reliability estimation
(NOTE:
• Faults can result in errors. Errors can lead to system failures
• Errors are the effect of faults. Failures are the effect of errors. )
Fault: A fault is the cause of an error.
A fault is a physical defect, imperfection, or flaw that occurs in hardware or software.
It is an incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. It is an inherent weakness of the design or implementation.
Fault avoidance – using techniques and procedures which aim to avoid the introduction of faults during any phase of the safety lifecycle of the safety-related system.
Fault tolerance – the ability of a functional unit to continue to perform a required function in the presence of faults or errors.
A fault is a defect that gives rise to an error.
Error: An error is a discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
An error is a deviation from correctness or accuracy.
An error is caused by a fault and may propagate to become a failure.
An error is, in short, a detected deviation from the agreed specification of requirements.
Failure:
Failure is a non-performance of some action that is due or expected.
It is the inability of a system or component to perform its required functions within specified performance requirements.
A system is said to have a failure if the service it delivers to the user deviates from compliance with the system specifications.
Levels of Testing
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. The purpose of this test is to evaluate the system’s compliance with the specified requirements.
It includes:
- Recovery Testing
- Performance Testing
- Security Testing
- Stress Testing
Recovery testing is a system test that forces the system to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.
Recovery testing tests the response of the system to the presence of faults, or loss of power, devices, services, data, etc. The system is subjected to the loss of the mentioned resources (as applicable and discussed in the SRS document) and it is checked if the system recovers satisfactorily. For example, the printer can be disconnected to check if the system hangs. Or, the power may be shut down to check the extent of data loss and corruption.
Performance testing is in general executed to determine how a system or sub-system performs in terms of responsiveness and resource usage.
To test the performance of the software you need to simulate its deployment environment and simulate the traffic that it will receive when it is in use.
Stress Testing is performance testing at higher than normal simulated loads. Stressing runs the system or application beyond the limits of its specified requirements to determine the load under which it fails and how it fails.
Stress testing is also known as endurance testing. Stress testing is especially important for systems that usually operate below the maximum capacity but are severely stressed at some peak demand hours.
Input data volume, input data rate, processing time, utilization of memory, etc. are tested beyond the designed capacity.
For example, suppose an operating system is supposed to support 15 multiprogrammed jobs, the system is stressed by attempting to run 15 or more jobs simultaneously.
Stress testing is done to check that programs can face abnormal situations.
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. In this:
· test cases that require maximum memory and other resources are executed,
· test cases that may cause memory management problems are designed,
· input data rates may be increased to determine how input functions will respond.
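A stress test can thus be sketched as a loop that pushes input volume past the designed capacity and records where the system fails. In the Python sketch below, the system under test and its capacity figure are hypothetical stand-ins:

```python
# Stress-test sketch (hypothetical system and capacity). The input data
# volume is increased beyond the designed capacity until the system fails.

DESIGNED_CAPACITY = 1000  # e.g., records per batch the system must handle

def process_batch(records):          # stand-in for the system under test
    if len(records) > 4 * DESIGNED_CAPACITY:
        raise MemoryError("simulated resource exhaustion")
    return len(records)

load = DESIGNED_CAPACITY
while True:
    try:
        process_batch(range(load))
        load *= 2                    # keep doubling the input data volume
    except MemoryError as exc:
        print(f"failed at load {load}: {exc}")  # how and where it fails
        break
```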
Security testing verifies that protection mechanisms built into a system will protect it from improper penetration. During security testing, the tester plays the role of the person who desires to penetrate the system. The tester may:
· attempt to acquire passwords,
· attack the system with custom software designed to break down any defenses that have been constructed,
· deny service to others,
· purposely cause system errors, hoping to penetrate during recovery,
· browse through insecure data, hoping to find the key to system entry.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Acceptance testing: Its purpose is to ensure that the product meets the minimum defined standards of quality before being accepted by the client or customer.
Often the client will have end-users conduct the testing to verify that the software has been implemented to their satisfaction (this is called "User Acceptance Testing" or "UAT").
Acceptance testing also typically focuses on artefacts outside the software itself, such as manuals and documentation, process changes, and training material.
Acceptance Tests: Also called 'customer tests'.
Acceptance Testing:
•Demonstrates satisfaction of user
•Users are essential part of process
•Usually merged with System Testing
•Done by test team and customer
•Done in simulated environment/real environment
Verification and Validation
| VERIFICATION | VALIDATION |
| --- | --- |
| Verification refers to the set of activities that ensure that the software correctly implements a specific function. | Validation refers to the set of activities that ensure that the software that has been built is traceable to customer requirements. |
| Are we building the product right? | Are we building the right product? |
| It is the process of evaluating the work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase. | It is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements. |
| Its objective is to ensure that the product is being built according to the requirements and design specifications, i.e., that work products meet their specified requirements. | Its objective is to ensure that the product actually meets the user's needs and that the specifications were correct in the first place, i.e., to demonstrate that the product fulfils its intended use when placed in its intended environment. |
| Plans, requirement specifications, design specifications, code, and test cases are evaluated during verification. | The actual product/software is evaluated during validation. |
| Through verification, we make sure the product behaves the way we want it to. | Through validation, we check that no mistake was made somewhere in the process such that the product built is not what the customer asked for; validation always involves comparison against the requirements. |
| Verification activities include reviews, walkthroughs, and inspections. | Validation activities include actual testing of the software. |
BLACK BOX TESTING
It is a software testing technique whereby the internal workings of the item being tested are not known by the tester.
Black box testing (also called functional testing) is testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Black-box testing is a method of software testing that tests the functionality of an application.
For example, in a black box test on software design the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs.
Specific knowledge of the application's code/internal structure and programming knowledge in general is not required. The tester is only aware of what the software is supposed to do, but not how.
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases.
These tests can be functional or non-functional, though usually functional.
Because only the functionality of the software module is of concern, black-box testing mainly refers to functional testing.
It is also termed data-driven, input/output driven or requirements-based testing.
The tester treats the software under test as a black box -- only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs to corresponding inputs.
This method of test can be applied to all levels of software testing: unit, integration, system and acceptance.
User Acceptance Testing (UAT) and system testing are classic examples of black-box testing.
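As a small illustration, suppose the specification says only that a function shall return the absolute value of its input. A black-box test derives cases from that statement alone, checking selected inputs against expected outputs without looking at the implementation. A sketch using Python's unittest (the specification and function here are hypothetical):

```python
# Black-box test sketch using Python's unittest. The specification is
# hypothetical: "absolute_value(x) shall return |x| for any integer x".
import unittest

def absolute_value(x: int) -> int:  # implementation under test (opaque to the tester)
    return -x if x < 0 else x

class TestAbsoluteValueSpec(unittest.TestCase):
    # Test cases are built from the specification alone: selected inputs
    # and their expected outputs, with no knowledge of the internals.
    def test_negative_input(self):
        self.assertEqual(absolute_value(-5), 5)

    def test_positive_input(self):
        self.assertEqual(absolute_value(7), 7)

    def test_zero_boundary(self):
        self.assertEqual(absolute_value(0), 0)

if __name__ == "__main__":
    unittest.main()
```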
========================================================================
SPIRAL MODEL
The spiral model combines the features of the prototyping model and the waterfall model. It is intended for large, expensive, and complicated projects.
The spiral model was defined by Barry Boehm.
The key characteristic of the spiral model is risk management at regular stages in the development cycle. The spiral model allows for incremental releases of the product, or incremental refinement through each pass around the spiral. It also explicitly includes risk management within software development: identifying major risks, both technical and managerial, and determining how to lessen them helps keep the software development process under control.
A spiral model is divided into a number of framework activities, also called task regions:
• Customer communication: tasks required to establish effective communication between developer and customer.
• Planning: tasks required to define resources, timelines, and other project-related information.
• Risk analysis: tasks required to assess both technical and management risks.
• Engineering: tasks required to build one or more representations of the application.
• Construction and release: tasks required to construct, test, install, and provide user support (e.g., documentation and training).
• Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.
At each iteration around the cycle, the products are extensions of an earlier product. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations.
The software engineering team moves around the spiral in a clockwise direction, beginning at the center.
The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on feedback derived from customer evaluation. In addition, the project manager adjusts the planned number of iterations required to complete the software.
The spiral lifecycle model allows for requirements or elements of the product to be added in when they become available or known.
ADVANTAGES:
- The spiral model forces early user involvement in the system development effort.
- Because software evolves as the process progresses, the developer and customer better understand and react to risks at each evolutionary level.
- You need not define the entire requirements in detail at first.
- Its design flexibility allows changes to be implemented at several stages of the project.
- The process of building up large systems in small segments makes it easier to do cost calculations; and
- The client, who will be involved in the development of each segment, retains control over the direction and implementation of the project.
- As the project moves towards its final phase, the customer's expertise with the new system grows, enabling smooth development of a product that meets the client's needs.
- Software engineers can get their hands in and start working on a project earlier.
DISADVANTAGES:
1. Spiral models work best for large projects only.
2. It demands considerable risk assessment expertise and relies on this expertise for success.
3. The spiral model emphasizes risk analysis, and thus requires customers to accept this analysis and act on it. This requires both trust in the developer and the willingness to spend more to fix the issues, which is why this model is often used for large-scale internal software development.
4. If the implementation of risk analysis will greatly affect the profits of the project, the spiral model should not be used.
5. Software developers have to actively look for possible risks and analyze them accurately for the spiral model to work.
6. Spiral models work on a protocol, which needs to be followed strictly for smooth operation. Sometimes it becomes difficult to follow this protocol.
7. Evaluating the risks involved in the project can drive the cost up; it may even exceed the cost of building the system.
===============================================================================
Difference Between Waterfall Model and Spiral Model
While in the spiral model the customer is made aware of all the happenings in the software development, in the waterfall model the customer is not involved. This often leads to situations where the software is not developed according to the needs of the customer. In the spiral model, the customer is involved in the software development process from the word go, which helps ensure that the software meets the customer's needs.
In the waterfall model, when the development process shifts to the next stage, there is no going back. This often leads to roadblocks, especially during the coding phase: many times the design of the software looks feasible on paper, but in the implementation phase it proves difficult to code. In the spiral model, since there are different iterations, it is easier to change the design and make the software feasible.
In the spiral model, one can revisit the different phases of software development as many times as one wants during the entire development process. This also helps in backtracking, reversing, or revising the process. The same is not possible in the waterfall model, which allows no such scope.
People often confuse the waterfall and spiral models because the spiral model seems more complex, owing to the many iterations that go into it. At the same time, there is often little documentation involved in the spiral model, which makes it difficult to keep track of the entire process. The waterfall model, on the other hand, has sequential progression along with clear documentation of the entire process, which gives one a better hold over it.