Suppliers can generally be categorised using the so-called capability maturity model, that is, a generalised view of an organisation’s ability to deliver consistent products or outputs.
The Capability Maturity Model (CMM) for Supplier Assessment is defined as:
- Level-1 or Initial Level, also called ad hoc or chaotic
- Level-2 or Repeatable Level where processes depend on individuals (“champions”)
- Level-3 or Defined Level where processes are institutionalised (and sanctioned by management)
- Level-4 or Managed Level where activities are measured and provide feedback for resource allocation (process itself does not change)
- Level-5 or Optimising Level where process allows feedback of information to change process itself (continuous improvement)
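As a purely illustrative sketch (the enumeration and comparison function below are assumptions for illustration, not part of the CMM itself), the five levels might be recorded during supplier assessment as an ordered enumeration so that assessed suppliers can be compared directly:

```python
from enum import IntEnum

class CmmLevel(IntEnum):
    """The five CMM levels, ordered so that comparisons reflect maturity."""
    INITIAL = 1      # ad hoc or chaotic
    REPEATABLE = 2   # processes depend on individual 'champions'
    DEFINED = 3      # processes institutionalised and sanctioned by management
    MANAGED = 4      # activities measured; feedback used for resource allocation
    OPTIMISING = 5   # measurement feeds back to change the process itself

def more_mature(a: CmmLevel, b: CmmLevel) -> CmmLevel:
    """Return the higher-maturity of two assessed suppliers."""
    return max(a, b)

# Example: a supplier assessed at Level-3 versus one assessed at Level-2.
print(more_mature(CmmLevel.DEFINED, CmmLevel.REPEATABLE).name)  # DEFINED
```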
Views on software reliability
Software, unlike hardware, has no ‘bathtub curve’ of burn-in over time or wear-out. For software, failures are characterised either as solvable, repeatable logic failures or as random ‘bugs’. The latter are generally difficult to find and, when found, indicate repeatable, solvable logic failures and that more bugs are possible. A ‘bug’ is an as-yet unrepeatable, to-be-solved logic failure.
The ‘robustness’ of a system containing software is a function of the fault tolerance and avoidance built into the system (the system architecture), set against a background of software defect detection and clearance during system build. Measurement of these factors is not an absolute science, and many other issues complicate the degree to which the system under assessment can be examined. Some of these are:
- The maturity of the organisation producing the software
- The degree of reuse in the software production
- The complexity of the functional and logical interactions within the system of interest
- The internal coupling of the system or of its parts
- The complexity of the interface between the software parts and between the software and the hardware elements of the system
- The ‘Criticality’ of the system’s contribution to the overall system availability and reliability assessment
The scope of these conditions makes it impossible to derive a single mechanism for assessing software reliability and availability prior to ‘in-service’ use. For software, the concepts of reliability and availability are neither meaningful nor easily measurable in absolute terms; they are terms that can properly be used only in the hardware and systems worlds.
Thus I propose the use of a ‘Robustness Factor’ generated from Risk Management processes and have found the following policy and approach to be valuable in assessing systems containing software:
- The ‘Criticality’ of the system’s contribution to the overall system capability (needs)
- The capability of the organisation producing the system
- The degree of change relative to previous systems, compared with the degree of reuse at the appropriate system, software system, software configured item and software component levels.
A new development is 100% change, so from this pseudo three-axis view a relative ranking of ‘degrees of concern’ can be established. ‘Robustness’ is a subjective outcome of the above factors, appraised perhaps through the use of software risk assessment questionnaires.
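A minimal sketch of such a ranking, assuming simple 0-to-1 scales for each axis and illustrative weightings (none of which are prescribed by the approach above), might look like this:

```python
# Hypothetical illustration: each axis is scored 0.0 (benign) to 1.0 (worst case).
# 'change' is 1.0 for a new development (100% change); the weights are assumptions.

def degree_of_concern(criticality: float, org_capability: float, change: float) -> float:
    """Combine the three axes into a single relative ranking score.

    criticality     -- contribution of the system to overall capability (1.0 = most critical)
    org_capability  -- maturity of the producing organisation (1.0 = least capable)
    change          -- degree of change versus reuse (1.0 = all-new development)
    """
    weights = {"criticality": 0.4, "org_capability": 0.3, "change": 0.3}  # assumed values
    return (weights["criticality"] * criticality
            + weights["org_capability"] * org_capability
            + weights["change"] * change)

# Example: a highly critical, all-new development from a less capable supplier
# ranks as a higher concern (lower expected robustness) than a largely reused
# system from a mature supplier.
print(degree_of_concern(0.9, 0.7, 1.0))   # ~0.87 -> high concern
print(degree_of_concern(0.3, 0.2, 0.1))   # ~0.21 -> low concern
```

The weighting of the axes is a design choice; in practice it would be informed by the risk assessment questionnaires rather than fixed in code.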
After risk assessment, a system-level view of robustness should be undertaken. If robustness is low (high risk) there will, of course, need to be a more managed approach, which should use ‘smart’ principles and ideally customer-supplier partnering to promote increased understanding of defect probability and hence to reduce defects in the final product. This approach is not specifically intrusive and is usually characterised by joint understanding, management and measurement, together with the use of established Quality Assurance staff. However, challenging an external organisation’s own quality assurance pedigree is generally intrusive and often viewed as hostile, and it is certainly not in keeping with the principles of Partnering.
This ‘assurance of ensurance’ approach attempts to gather data on the key measures of Size, Time, Effort and Defects, and to compare achieved levels against estimates to produce a ‘guidance system that works’ for the particular development. With time, the confidence and process improvement achieved during development start-up activities give greater strength to the prediction of in-service ‘robustness’, which can then be extrapolated from early defect density characteristics and initial test results. However, such measures only become meaningful once they can be taken on integrated software products, so this managed approach does not give an early indication of in-service ‘robustness’; if it is followed faithfully, though, confidence in the predictions made from results measured during the integrated test phase is high.
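As an illustrative sketch only (the data layout, units and tolerance figure below are assumptions, not part of the approach itself), the comparison of achieved levels against estimates for the four key measures might be recorded along these lines:

```python
# Hypothetical sketch: track estimated versus achieved values for the four key
# measures and flag where the development is drifting from its 'guidance system'.
# Assumed units: size in source lines, time in months, effort in person-months.

MEASURES = ("size", "time", "effort", "defects")

def variance_report(estimated: dict, achieved: dict, tolerance: float = 0.15) -> dict:
    """Return the relative variance of each measure and whether it exceeds tolerance."""
    report = {}
    for m in MEASURES:
        variance = (achieved[m] - estimated[m]) / estimated[m]
        report[m] = {"variance": round(variance, 2),
                     "out_of_tolerance": abs(variance) > tolerance}
    return report

# Example: defects found during early builds are running well above estimate.
estimated = {"size": 120_000, "time": 18, "effort": 400, "defects": 250}
achieved  = {"size": 130_000, "time": 19, "effort": 430, "defects": 340}
print(variance_report(estimated, achieved))
```

Any measure drifting outside tolerance, particularly defects, would then temper the confidence placed in the extrapolated in-service ‘robustness’.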
An alternative approach, possible only with an organisation of suitable maturity, is to use one of the ‘modern’ process measurement ‘tool kits’ to predict reliability at an early stage. However, restrictions on the utility of such tools are:
- The organisation has to be capable of measuring work processes with confidence, ease and accuracy
- The development process and product life-cycle need to be matched to the toolkit assumptions and methods
- The development process needs to be repeatable or repeated from a similar previous project
© D L Bird 2005