Standards
Standards are developed by groups of international experts who share knowledge to create solutions to common problems found across a range of activities.
Standards help with:
- Sharing knowledge
- Identifying aspects/pitfalls
- Understanding why processes exist:
- Chesterton’s Fence
- Regulations are written in blood
- Creating a shared language
- Creating regulations/policies
- Understanding and addressing problems/challenges
- Giving customers some confidence in products/organizations
- Customers also expect suppliers to use them, or at least be aware of them
- Allowing consistency and interoperability between systems
Some organizations:
- ISO: International Organization for Standardization
- IEEE Standards Association
- International Electrotechnical Commission
Problems:
- Competing standards
- Usually very generic
- Can be heavy in red tape/process: requires lots of policies, documents, data gathering
- Little immediate reward; very boring
- May require certification to give confidence to customers
Understanding a standard does not mean you understand how to apply it, implement the solution, or attain quality.
Example: one of UC’s (5th?) student management systems:
- 26 million paid up front to create the new system, plus 2 million per year for support
- Delivered
- Crashes; enrollment delayed
- Provider says issue with environment
- Buy new servers; still not working
- Upgrade network; still not working
- Give up; go back to old system
- Millions wasted, students may have enrolled elsewhere due to delay, admin delayed, staff annoyed
Quality and Service Standards
IEEE Standard 1012: system, software, and hardware verification and validation.
Determines if requirements are:
- Correct
- Complete
- Accurate
- Consistent
- Testable
Defines four integrity levels.
Verification and Validation (V & V)
Comparison:
- Verification:
- Checking the process during the life cycle activity
- Are we building the product correctly?
- There is some correlation between process and product; hence, a good process gives some confidence in the quality of the product
- Validation:
- Checking that the product meets the requirements at the end of the life cycle
- Includes non-functional requirements, things that the customers may not have thought about
- Did we build the correct product?
- Giving customers confidence in the product
Verification:
- Evaluating work products during each life cycle activity to determine whether the product is being built according to the requirements
- Evaluation of:
- Plans
- Requirement specifications
- Design specification
- Code
- Test cases
- Through the use of:
- Reviews
- e.g. building on different OSes on Erskine (‘customer’) computers, under heavy load
- Walkthroughs
- Informally going through code and/or features with developers
- Reverse walkthrough: customer explains to developer how to use products to find misunderstandings
- Inspections
- Stand-ups, formal review sessions, etc.
Validation:
- Checking that the code matches user needs and fulfills its intended use
- Were the specifications even correct?
- Evaluation of product/software
- Through the use of testing (e.g. acceptance testing)
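A minimal sketch of what an automated acceptance test might look like: it exercises the running product the way a user would and checks the agreed behaviour rather than the internal design. The product, URL, endpoints, and acceptance criterion below are all hypothetical.

```python
# Hypothetical acceptance criterion: "A registered user can log in and see
# their enrolment summary." Names, routes, and credentials are illustrative.
import requests

BASE_URL = "http://localhost:8080"  # assumed local deployment of the product under test

def test_registered_user_sees_enrolment_summary():
    session = requests.Session()

    # Log in as a known test user (assumes this account exists in the test data)
    response = session.post(
        f"{BASE_URL}/login",
        data={"username": "student1", "password": "correct-horse"},
    )
    assert response.status_code == 200

    # The enrolment summary should be reachable and contain the expected heading
    summary = session.get(f"{BASE_URL}/enrolments/summary")
    assert summary.status_code == 200
    assert "Enrolment summary" in summary.text
```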
Software integrity level (SIL)
- Negligible consequences if element fails - mitigation not required
- Minor consequences if element fails - complete mitigation possible
- Serious consequences if element fails
- Permanent injury, major system degradation, economic/social impact
- e.g. bank accounts etc.
- Partial to complete mitigation required
- Lots of quality assurance
- Grave consequences - no mitigation possible
- Loss of life, loss of systems, economic or social loss
NASA matrix: the software integrity level rises with both error potential and the consequences of failure:
- Low error potential, low consequences: SIL 1
- High error potential, low consequences: SIL 2
- Low error potential, high consequences: SIL 3
- High error potential, high consequences: SIL 4
Engineering V Model
Waterfall-type development lifecycle. Requirements and design are decomposed down the left-hand side; the system is built at the bottom and then integrated and tested up the right-hand side, with each testing level verifying (or, at the top, validating) the corresponding level of requirements:
- Stakeholder requirements: validated by user acceptance testing
- System requirements: verified by system integration testing
- Subsystem requirements: verified by integration testing
- Unit/component requirements: verified by unit testing
- Development sits at the bottom of the V, joining the two sides
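At the bottom rung of the V, a unit test verifies a component against its own requirement. A minimal sketch, with a made-up component requirement and function (nothing here comes from a real project):

```python
# Hypothetical component requirement: "the fee calculator applies a 10%
# discount when a student is enrolled in more than four courses."
import pytest

def calculate_fee(base_fee: float, course_count: int) -> float:
    """Toy component under test."""
    discount = 0.10 if course_count > 4 else 0.0
    return base_fee * (1 - discount)

def test_discount_applied_above_four_courses():
    # Verifies the component against its stated requirement
    assert calculate_fee(100.0, 5) == pytest.approx(90.0)

def test_no_discount_at_four_or_fewer_courses():
    assert calculate_fee(100.0, 4) == pytest.approx(100.0)
```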
Task: SENG302 Verification and Validation
Verification (remember: verification requires artifacts that the assessor can view; see the sketch after this list):
- Scrum board, burn-down chart
- Quality drops by the end of the sprint
- Code smells: Sonarqube
- Ceremonies: output of formal sessions
- Postman testing results
- Cucumber: integration testing results
- Were alternative flows missed?
- GitLab code reviews: DoD, comments, feedback
Did we have enough work products/artifacts to validate?
Design decisions: if not recorded, harder to validate/verify
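Because verification relies on artifacts the assessor can actually view, it helps when automated test runs leave a durable, dated record rather than just console output. A small sketch, assuming the suite was run with pytest's built-in `--junitxml=report.xml` option (the file name and the summary format are illustrative):

```python
# Summarize a JUnit-style XML test report into a short, dated line that can
# be kept as a verification artifact. Assumes report.xml was produced by
# `pytest --junitxml=report.xml`; other tools emit a compatible format.
import datetime
import xml.etree.ElementTree as ET

def summarize(report_path: str = "report.xml") -> str:
    root = ET.parse(report_path).getroot()
    # Newer pytest versions wrap results in <testsuites><testsuite ...>
    suite = root[0] if root.tag == "testsuites" else root
    tests = int(suite.get("tests", "0"))
    failures = int(suite.get("failures", "0"))
    errors = int(suite.get("errors", "0"))
    skipped = int(suite.get("skipped", "0"))
    passed = tests - failures - errors - skipped
    stamp = datetime.date.today().isoformat()
    return (f"{stamp}: {passed}/{tests} passed "
            f"({failures} failed, {errors} errors, {skipped} skipped)")

if __name__ == "__main__":
    print(summarize())
```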
Validation:
- Checking against ACs
- User testing: summaries of results
- Did we check that the product was usable and met non-functional requirements, or only that the ACs were met?
- Feedback on story success
- Manual testing
- Record of steps, results and when they were run
- Of course, they were run multiple times, right?
- Gets unwieldy when there are too many tests; see the record-keeping sketch below
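One way to keep those records manageable is to log each manual run to a shared file rather than relying on memory. A minimal sketch of such a record (the file name and fields are assumptions, not a prescribed format):

```python
# Append each manual test execution (steps, result, date) to a shared CSV log
# so there is a persistent record of what was run, when, and with what outcome.
import csv
import datetime
import os
from dataclasses import dataclass, asdict, fields

@dataclass
class ManualTestRun:
    test_id: str   # e.g. "MT-12"
    steps: str     # what the tester actually did
    result: str    # "pass"/"fail" plus any notes
    run_date: str  # ISO date of the run

def record_run(run: ManualTestRun, log_path: str = "manual_tests.csv") -> None:
    """Append one manual test execution, writing a header only for a new file."""
    new_file = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ManualTestRun)])
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(run))

record_run(ManualTestRun(
    test_id="MT-12",
    steps="Register with an email address that already exists",
    result="pass: duplicate email rejected with a clear error message",
    run_date=datetime.date.today().isoformat(),
))
```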
As a customer of a SENG302 team, how does the team give you confidence that it can deliver?
- Look at previous projects: verify/validate, independent auditors?
- Frequent meetings with team/leader?
- Output of previous sprints
- Improvements between sprints? Are they making the same mistakes every time?
- Do they have processes to ensure everyone can deliver?
- e.g. code style, code reviews, white-box testing
Problem: a large number of software projects fail. Why?
- Each software project is seen as uncharted territory
- Predictability is low
- Quantitative details (work artifacts, e.g. metrics) either do not exist or exist only at a superficial level within a single project
- This makes it hard to gain confidence
- Software projects cost a lot of money
How can a software company ensure high-quality, low failure, high predictability and consistency?
Capability Maturity Model
Military software contracts were often late, failed, or went over budget. The US DoD's Software Engineering Institute developed the Capability Maturity Model to quantify how mature a software business is and to assess its practices, processes, and behaviors.
Five aspects of CMM:
- Maturity levels
- Key process areas (KPA):
- Activities that, when done together, lead to an important goal being accomplished
- Goals:
- State required for key process area to be met
- Must include scope, boundaries, and intent of each KPA
- ‘Goals met’ define capability
- Common features:
- Practices that implement/institutionalize a KPA
- Five common features:
- Commitment to perform
- Ability to perform
- Activities performed
- Measurement and analysis
- Verifying implementation
- e.g. everyone understands stand-ups and sprint planning, but also organization-specific processes, or processes where the organization has its own unique twist
- Key practices
- Elements that contribute most effectively to implementation and institutionalization (including infrastructure)
- e.g. employee on-boarding processes
Maturity levels:
- Initial:
- Immature
- Ad-hoc, chaotic
- Uncontrolled
- Reactive
- Unstable environment
- Technical debt not considered
- Success dependent on competence and heroics of people
- Most SENG302 teams are still at this level at the end of the year
- Repeatable:
- Basic project management
- More than just burndown/burnup charts
- Some processes repeatable
- Requires processes to be explicit and documented
- Repeatable processes could have consistent results
- Repeatable processes likely to be project specific, not organization wide
- Defined:
- Sets of defined standard processes
- Documented
- Standardized
- Integrated
- Improvements documented over time
- Not necessarily validated
- ‘This helped us last time, so it will probably help us in other projects’
- Users may not be competent in following the processes
- Managed (capable):
- Processes are validated
- Enough data to prove the effectiveness of their processes
- Processes are monitored and controlled; data collected and analyzed
- Processes are effective across a range of conditions/projects
- e.g. different technologies, languages, teams
- Users competent in the processes
- Easily adapted to new projects without measurable loss in performance
- Performance is quantifiably predictable
- Optimizing (efficient):
- Processes tweaked and quantifiably improved over time by analyzing performance patterns, while also maintaining predictability and meeting improvement objectives
- Quantitative understanding of variation
- Everyone involved with improving processes
Capability Maturity Model Integration (CMMI)
Successor to CMM, developed at Carnegie Mellon University.
Focuses more on results than on activities, compared to the CMM.
CMM is based heavily on the paper trail; CMMI focuses more on strategy (but still involves a lot of paperwork/documentation).
- Initial
- Managed
- Processes are planned, performed, measured, controlled
- Commitments with stakeholders, revised as needed
- Defined
- Standards, procedures, tools, methods created
- Developers can move between projects easily - consistency between teams and projects
- Quantitatively managed
- Also looks at sub-processes
- Optimized
- Continuous improvement; incremental/innovative technological improvements
Models for:
- Acquisition:
- Problems relating to suppliers and expectations
- Unique KPAs:
- Agreement management
- Acquisition requirements development
- Acquisition technical management
- Acquisition validation; solicitation and supplier agreement development
- Development:
- Any type of product development
- Trying to eliminate defects in products/services
- Unique KPAs:
- Product integration
- Requirements development
- Technical solution
- Validation and verification
- Services:
- Managing service demand and maintaining high levels of customer service
- Unique KPAs:
- Capability and availability management
- Incident resolution and prevention
- Service continuity
- Service delivery
- Service system development
- Service system transition
- Strategic service management
Appraisals:
- Feedback on business’s maturity and how to improve
- Appraisal by certified third parties
Pros and Cons
Pros:
- Puts more emphasis on management
- Things must be done because the process requires them, not just because developers want to do them
- Takes the onus of performance off the developer or even the team
- Consistency across projects, organizations, and time
- On-boarding new developers easier
- Learned information is not lost - processes are improved over time across the entire organization
- Mitigates bus-factor risk (to a point)
- Figuring out the root cause of problems
- Emphasis on reflection, self-evaluation, monitoring, critiquing and continual improvement
Cons:
- Much more risk averse
- Maintaining their CMMI certification becomes very important
- Level hunting/levelling up
- Instead of making the team better and improving technical performance, the organization is always looking to improve its paperwork, overhead, etc. to gain a higher certification level
- Individual/team development
- Focus on process/management, not improving technical skills
- Process heavy
- Standards/processes do not automatically equate to quality
- No guarantee that the project will actually be developed using these processes; paperwork does not mean the team is actually following it
- Aimed more at strategic management, not development
Immaturity Models
Businesses that are below level 1: anti-patterns to avoid.
- Level 0: Negligence
- The company does not care about processes
- All problems are perceived to be technical problems
- Managerial/QA activities are deemed to be overhead, superfluous to the ‘actual’ work
- Reliance on silver bullets
- Level -1: Obstructive
- Company forcefully goes against productive processes; counterproductive processes imposed instead
- Processes are rigidly defined, adherence to the form is stressed
- Ritualistic ceremonies
- Collective management precludes assigning responsibility
- Flat management: lots of managers at the same level
- Conflicting decisions ensure developers are always doing something wrong
- Status quo über alles: status quo above all else
- Don’t do anything different; don’t try and improve anything
- Level -2: Contemptuous
- Disregard for good engineering institutionalized
- ‘Get stuff done’; arrogance towards those who follow processes
- ‘It’s all academic’
- Process improvement activities actively disregarded
- Complete lack of training program
- Level -3: Undermining
- Discrediting competitors and pointing the finger at others rather than improving themselves
- Rewarding failure and poor performance