All Files in ‘SENG401 (2022-S1)’ Merged

01. Introduction

Weighting:

4th year:

a ‘Senior’ engineer should have:

What does quality mean?

SENG401 is about critical thinking: careful consideration of problems, and recommendations supported with justifications and evidence.

Informal debates with the class: the class splits into two, and each side takes an extreme position (this makes it hard to defend and requires examples).

Debate: “You should always/never document code”.

02. Principles

Controversial topic debates: defending extreme viewpoints is difficult and requires research to justify the viewpoint.

Software engineering principles not black and white:

Technical Debt

Design decisions made in the past under circumstances that are no longer relevant.

Conscious, un-ideal decisions made in the past that must be corrected.

Ward Cunningham, 1992: a quick and easy approach comes with interest - additional work that must be done in the future. The longer you wait, the more code relies on the debt, so interest grows over time.

Design stamina hypothesis (Martin Fowler): in a time-functionality graph:

Types of Technical Debt

Two axes: deliberate/inadvertent, and reckless/prudent

Reasons for Debt

Caused by:

Measuring Technical Debt

SENG302: a lot of debt by the end of the year.

Measure how much technical debt there is by:

Types of debt:

Interest rates:

Positive/negative value, visible/invisible attributes:

Pick a process/framework (Scrum/Kanban/Waterfall): Which part is devoted to Technical Debt correction/payment?

Fan-in vs Fan-out

Refactor vs Re-engineering

Hence, refactorings should be done as-you-go while re-engineerings should be done infrequently and only after careful planning.

Reuse vs KISS

Object-oriented programming built on:

But reuse didn’t work - requirements for each program and the abstractions required differ.

Reuse is big design up front:

Unfortunately, determining the ‘correct’ design is impossible until implementation.

Situations when reuse does work:

Sapir-Whorf hypothesis/linguistic relativity: the structure of a language influences how you think. In programming terms: the programming paradigms we are used to influence our mindset and how we solve problems.

Reuse requires generic and abstract code/thinking:

KISS:

Design Principles

Encapsulation vs Information Hiding

Encapsulation is a tool to draw a border around a module.

Information hiding is a principle where you hide internal details from the outside world. This can be done using encapsulation.

This is used to hide what varies; anything that could be changed should be hidden (e.g. algorithm used for sorting).

Hence, argument and return types should be as high/generic as possible (e.g. return Collection instead of ArrayList).

If a property or method is private, the type doesn’t matter as the type is encapsulated anyway.
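
A minimal Java sketch (class and method names hypothetical): the concrete ArrayList stays hidden behind the more generic Collection type.

    import java.util.ArrayList;
    import java.util.Collection;

    class Roster {
        // The concrete type is private, so it is encapsulated anyway.
        private final ArrayList<String> members = new ArrayList<>();

        // Return the most generic type callers need; swapping the field
        // to a LinkedList later breaks no client code.
        public Collection<String> getMembers() {
            return members;
        }
    }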

Visibility, Access Levels, Modifiers

‘Never use public properties; use getters and setters instead’.

Getters and setters; two extreme viewpoints:

Coupling & Cohesion

Coupling: the extent to which two modules depend on each other.

Cohesion: how well the methods and properties within a module belong with each other.

Aim for high cohesion, low coupling.

Principle: keep data and behavior together (i.e. high cohesion).

The principle of separation of concerns separates data and behavior, but puts the related behaviors together.

The SOLID Principles

Single Responsibility Principle (SRP)

Each thing should only be in charge of one thing.

A responsibility = a reason for the module to change.

The SRP conflicts with the modeling of the real world, where objects usually do more than two things:

In addition, applying the SRP mindlessly can lead to:

Figuring out what the single responsibility should be can often be difficult.

Robert Martin’s thoughts on SRP:

…This principle is about people.

When you write a software module, you want to make sure that when changes are requested, those changes can only originate from a single person, or rather, a single tightly coupled group of people representing a single narrowly defined business function.

Imagine you took your car to a mechanic in order to fix a broken electric window. He calls you the next day saying it’s all fixed. When you pick up your car, you find the window works fine; but the car won’t start. It’s not likely you will return to that mechanic because he’s clearly an idiot.
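
A minimal Java sketch of the SRP (names hypothetical): pay calculation and persistence are separated so that finance-driven and database-driven changes originate from different modules.

    class Employee {
        private double hoursWorked;
        private double hourlyRate;
        public double getHoursWorked() { return hoursWorked; }
        public double getHourlyRate() { return hourlyRate; }
    }

    // Changes requested by finance people land here...
    class PayCalculator {
        public double calculatePay(Employee e) {
            return e.getHoursWorked() * e.getHourlyRate();
        }
    }

    // ...and changes requested by database people land here.
    class EmployeeRepository {
        public void save(Employee e) { /* persistence logic */ }
    }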

Open/Closed Principle (OCP)

Modules should be open for extension, but closed for modification.

That is, you should be able to extend the behavior of an existing program without modifying it.

Interfaces are useful because they are an agreement that you will follow some defined behavior (for all public methods/properties); that is, Design-by-contract:

The open/closed principle forces abstractions and loose coupling and often requires dependency inversion.

Libraries and plug-in architectures are often good examples of OCP.
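
A minimal Java sketch of the OCP (names hypothetical): new behavior is added by writing a new implementation, not by editing existing code.

    interface ExportFormat {
        String export(String data);
    }

    class CsvExport implements ExportFormat {
        public String export(String data) { return "csv:" + data; }
    }

    // Added later: no change to Exporter or CsvExport was needed.
    class JsonExport implements ExportFormat {
        public String export(String data) { return "{\"data\":\"" + data + "\"}"; }
    }

    class Exporter {
        // Closed for modification: depends only on the abstraction.
        public String run(ExportFormat format, String data) {
            return format.export(data);
        }
    }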

Can a program be fully closed? Probably not as this requires big design up-front.

Protected Variation: anything that is likely to change should be hidden and pushed downwards, with stable interfaces above/around them.

Liskov-Substitution Principle (LSP)

You should be able to change the subclass of an object without changing the behavior of the program i.e. design-by-contract: children adhere to their parent’s contract.

The LSP is not easy to implement and has no immediate benefits; rather, it gives long-term trust in modules.
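
The classic Rectangle/Square sketch (not from the lecture notes, but a standard illustration) shows a violation: the subclass narrows its parent's contract, so substituting it changes the behavior of the program.

    class Rectangle {
        protected int width, height;
        public void setWidth(int w)  { width = w; }
        public void setHeight(int h) { height = h; }
        public int area() { return width * height; }
    }

    class Square extends Rectangle {
        // Keeping the sides equal breaks callers that set width and
        // height independently - the parent's contract is violated.
        @Override public void setWidth(int w)  { width = w; height = w; }
        @Override public void setHeight(int h) { width = h; height = h; }
    }

    // Fine for Rectangle, fails for Square:
    // r.setWidth(2); r.setHeight(3); assert r.area() == 6;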

Interface Segregation Principle (ISP)

Clients should not be forced to depend on interfaces/methods they will not use:

Robert Martin's original article.
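
A minimal Java sketch (names hypothetical): segregated interfaces mean a simple printer never has to stub out scanning methods it cannot support.

    interface Printer { void print(String doc); }
    interface Scanner { void scan(String doc); }

    class MultiFunctionDevice implements Printer, Scanner {
        public void print(String doc) { /* ... */ }
        public void scan(String doc)  { /* ... */ }
    }

    // Depends only on the one capability it actually provides.
    class SimplePrinter implements Printer {
        public void print(String doc) { /* ... */ }
    }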

Dependency Inversion Principle (DIP)

High-level modules should not depend on low-level modules: both should depend on abstractions/interfaces.

From this, it follows that:

Mostly taken for granted by the newer generation of programmers learning OO languages.
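
A minimal Java sketch (names hypothetical): the high-level ReportService and the low-level DiskStorage both depend on the Storage abstraction.

    interface Storage {
        void write(String content);
    }

    class DiskStorage implements Storage {
        public void write(String content) { /* low-level file I/O */ }
    }

    class ReportService {
        private final Storage storage; // injected abstraction, not a concrete class

        ReportService(Storage storage) { this.storage = storage; }

        public void publish(String report) {
            storage.write(report); // no knowledge of disk, cloud, etc.
        }
    }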

Common Closure Principle (CCP)

SRP at the package level: classes in a package should be closed together against the same kind of changes.

Common Reuse Principle (CRP)

Classes in a package are reused together: if you reuse one class, reuse all of them.

Classes being reused within the same context should be part of the same package.

e.g. Util package in Java.

Abstract Factory (AKA Kit Pattern)

Dependency inversion: client no longer needs to care about the specifics of the implementations.

Factories define an interface to instantiate new instances of a specific implementation of a class/interface, removing the need for a client to know the exact type being instantiated.

Hence, this is an example of dependency inversion as the client uses an interface to distance itself from the specific class and constructor being called.

An abstract factory takes this further by giving the factory interface methods to instantiate multiple related (and possibly dependent) objects.

The abstract factory keeps behavior together, not data.

Factory methods give looser coupling; details (how the objects are instantiated) are brought down to concrete classes, while interfaces are given to the higher layers (abstract classes).

The abstract factory is an example of parallel hierarchy: multiple hierarchies following the same structure. e.g.:

      Operator              Vehicle
   ______|______           ____|____
  ▽            ▽           ▽       ▽
Pilot        Cyclist     Plane    Bike

The factory method ensures the right operator is assigned to the vehicle. But what if you already have a specific operator you want to assign to the vehicle?

If you have a setOperator(Operator) method on the Vehicle interface, it defeats the point of the factory method. Rather, the concrete classes (Plane, Bike) must have setPilot(Pilot) and setCyclist(Cyclist) methods.

That is, go as high as you can in your hierarchy, but no further - there is no point raising it to the top if it means it fails to meet your requirements.
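
A minimal Java sketch of the Operator/Vehicle hierarchies above as an abstract factory (the factory names are assumptions):

    interface Operator {}
    interface Vehicle {}

    class Pilot implements Operator {}
    class Cyclist implements Operator {}
    class Plane implements Vehicle {}
    class Bike implements Vehicle {}

    // The abstract factory creates a family of related objects, so a
    // client can never pair a Cyclist with a Plane.
    interface TransportFactory {
        Operator createOperator();
        Vehicle createVehicle();
    }

    class AirTravelFactory implements TransportFactory {
        public Operator createOperator() { return new Pilot(); }
        public Vehicle createVehicle()  { return new Plane(); }
    }

    class RoadTravelFactory implements TransportFactory {
        public Operator createOperator() { return new Cyclist(); }
        public Vehicle createVehicle()  { return new Bike(); }
    }

    // Client code depends only on the abstractions:
    // TransportFactory factory = new AirTravelFactory();
    // Operator op = factory.createOperator(); // always matches the vehicle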

Stable Dependencies Principle (SDP)

Want stability (lack of changes) at the top of the hierarchy. See: hide what varies, contracts.

A module should depend on modules that are more stable than itself.

Maximum stability: if environment changes, module can’t change. Additionally requires big design up-front.

Should stability/instability be distributed across the entire program? No; some parts of the program will need to change frequently.

Stable Abstractions Principle (SAP)

A module should be as abstract as it is stable:

Tell, Don’t ask

Law of Demeter

If you have method M in object O, then M can call the methods of:

  - O itself
  - M's parameters
  - any objects created/instantiated within M
  - O's direct component objects (fields)
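
A minimal Java sketch of the difference (Customer/Wallet names hypothetical): tell the direct collaborator to do the work instead of reaching through it.

    class Wallet {
        private double balance = 100.0;
        public boolean deduct(double amount) {
            if (balance < amount) return false;
            balance -= amount;
            return true;
        }
    }

    class Customer {
        private final Wallet wallet = new Wallet();

        // Tell, don't ask: the caller never touches Wallet's internals.
        public boolean pay(double amount) {
            return wallet.deduct(amount);
        }
    }

    // Violation: chaining through a neighbour's internals ('ask'):
    // customer.getWallet().getBalance() ...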

#noestimate

What does it mean?

Standard agile estimates stories in story points to determine the number of stories done in a sprint and to calculate the team's velocity.

#noestimate instead just completes tasks by priority and uses the tasks completed to calculate velocity. As the tasks are sliced vertically, the client gets a tangible end result at the end of each sprint.

So why estimate? The process (e.g. planning poker, discussion) is useful even if the estimates themselves are not.

Vertically slicing means:

Story mapping:

Class Debates

Always/Never Write Documentation

Always:

Never:

Always, counterpoints:

Never, counterpoints:

Collective vs Individual Code Ownership

Collective:

Individual:

Individual, counterpoints:

Collective, counterpoints:

03. Audits

Independent party verifying that the processes are being followed and the end product meets the requirements.

Formal software audits:

Less-formal software audits:

In SENG401, a less formal software audit will be done on SENG302 teams.

Software outcomes can be divided into two strands:

  1. Does it meet the acceptance criteria?
  2. Does it adhere to a process that increases the chances of success:

SENG302 Audit

Part 1: Report

Observe:

Can ask Moffat for summarized peer feedback/self-reflection, but not the full submissions.

Then, the audit report:

There must be evidence, ideally multiple factors that corroborate the conclusions drawn.

Part 2: Live Review

Diagnosis, prognosis, recommendation.

Talk to the team - the patient - professionally:

Prognosis:

Misc:

04. When Good Design Goes Bad

UML requires big design up-front and keeping diagrams and code in sync. However, it is useful for communication.

What we’ve learned:

Design Erosion

AKA: architectural drift, software aging, architecture erosion, software decay, software rot, software entropy.

When the initial design becomes more and more obsolete:

Consequences:

Eventually, a replacement, rewrite, re-engineering or refactor becomes required.

So what to do when changes occur?

  1. Optimal design strategy
  2. Minimal effort strategy

‘Natural’ Rot

… the design of a software project is documented primarily by its source code

Robert C. Martin

To destroy an abandoned building, cut a hole in the roof and wait for it to rot from the inside out.

Software works the same way; without proper maintenance, a small hole can lead to it decaying from the inside.

Broken window theory: hacks in software normalize other hacks, leading to a spiraling descent in quality.

Symptoms of rot:

Preventing Rot

Address problems immediately:

Class Discussion: How Do Classical and Modern Processes Influence Design Erosion

Waterfall:

Agile:

Waterfall in business:

Design/Code Smells

NB: code smells can also refer to good smells.

An indication/symptom that something may be wrong: but does this mean it should be fixed? Two approaches:

Smells: Within Classes

Smells: Between Classes

Metrics

If you can’t measure it, you can’t improve it

Peter Drucker

You can’t control what you can’t measure

Tom DeMarco

Some context-dependent measure of a project, usually measured over time to track how the project is improving or getting worse. However, the context can change over time, making interpreting the metrics and making comparisons over time more difficult.

Benefits:

Dangers:

Difficulties:

Alternative ways to identify code smells:

Measurements

McCabe’s Cyclomatic Complexity:
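
The standard formula is V(G) = E - N + 2P (edges, nodes, and connected components of the control flow graph); for a single method this reduces to the number of decision points plus one. A small illustrative sketch (hypothetical method):

    class Complexity {
        // Two decisions (loop condition, if) give a complexity of 3.
        int countPositives(int[] xs) {
            int count = 0;
            for (int x : xs) {   // decision 1
                if (x > 0) {     // decision 2
                    count++;
                }
            }
            return count;
        }
    }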

Chidamber and Kemerer OO Metrics:

  1. Weighted methods per class (WMC)
  2. Depth of inheritance tree (DIT)
  3. Number of children (NOC)
  4. Coupling between objects (CBO)
  5. Response for class (RFC)
  6. Lack of cohesion in methods (LCOM):

TODO Lorenz and Kidd:

Smells:

Other:

Refactoring

Refactoring will TODO

When to refactor?

TODO

Correctness:

Rewrites:

Reengineering:

TODO

05. Standards

Standards

Groups of international experts sharing knowledge to develop solutions to common problems found in a range of activities.

Standards help with:

Some organizations:

Problems:

Understanding the standard does not mean you understand how to apply it, implement the solution, or attain quality.

One of UC's student management systems (the 5th?):

Quality and Service Standards

IEEE Standard 1012: system, software, and hardware verification and validation.

Determines if requirements are:

4 integrity levels.

Verification and Validation (V & V)

Comparison:

Verification:

Validation:

Software integrity level (SIL)

  1. Negligible consequences if element fails - mitigation not required
  2. Minor consequences if element fails - complete mitigation possible
  3. Serious consequences if element fails
  4. Grave consequences - no mitigation possible

NASA matrix:

Consequences
     ^
     | SIL3      SIL4
     |
     | SIL1      SIL2
     --------------->
      Error Potential

Engineering V Model

Waterfall-type development lifecycle. On the left, the requirements and design are verified and validated; on the right, the system is verified and validated.

Stakeholder  <------------------------ User acceptance
requirements        Validates            testing
     \                                      ^
      \                                    /
       v             Verifies             /
      System   <---------------- System integration
    requirements                      testing
         \                              ^
          \                            /
           v         Verifies         /
       Subsystem  <-----------  Integration
      requirements               Testing
             \                     ^
              \                   /
               v     Verifies    /
      Unit/component <-------  Unit
        requirements         testing
                  \            ^
                   \          /
                    v        /
                   Development

Task: SENG302 Verification and Validation

Verification (remember; verification requires artifacts that the assessor can view):

Did we have enough work products/artifacts to validate?

Design decisions: if not recorded, harder to validate/verify

Validation:

As a customer of a SENG302 team, how does the team give you confidence that they can deliver?

Problem: a large number of software projects fail. Why?

How can a software company ensure high-quality, low failure, high predictability and consistency?

Capability Maturity Model

Military software contracts were often late, failed, or went over-budget. The US DoD Software Engineering Institute developed the capability maturity model to quantify how mature a software business is and to assess its practices, processes, and behaviors.

Five aspects of CMM:

  1. Maturity levels
  2. Key process areas (KPA):
  3. Goals:
  4. Common features:
  5. Key practices

Maturity levels:

  1. Initial:
  2. Repeatable:
  3. Defined:
  4. Managed (capable):
  5. Optimizing (efficient):

Capability Maturity Model Integration (CMMI)

Successor to CMM, by Carnegie Mellon University.

Focuses more on results rather than activities when compared to the CMM.

CMM is based heavily on the paper trail; CMMI focuses more on strategy (but still a lot of paper/documentation).

  1. Initial
  2. Managed
  3. Defined
  4. Quantitatively managed
  5. Optimized

Models for:

Appraisals:

Pros and Cons

Pros:

Cons:

Immaturity Models

Businesses that are below level 1: anti-patterns to avoid.

06. Testing

Testing strategies != testing

Debate: developers should not test their own code/program.

Positive: developers should develop, testers should test.

Negative: developers should develop and test their own code.

Positive:

Negative:

Counterpoints against positive:

Counterpoints against negative:

Quality

Quality is created by the developer - so what is testing for?

Testing isn’t about unit testing or integration testing. It is the mindset; a systematic process of:

Testing is about how a user experiences the system and how it compares to our expectations.

In what contexts is testing not required?

Hypothesis Testing

The broad steps:

Example:

Verifiability vs Falsifiability

What will it take for us to be able to claim that there are no bugs in the system?

You must test every conceivable avenue and every single branch; verify the system. This is almost impossible, although formal proofs are possible in limited domains.

Karl Popper - The Logic of Scientific Discovery, 1934.

Verifiability: every single branch can be tested

Falsifiability: at least one example that contradicts the hypothesis can be found

Hence, there is a large asymmetry between the two: when making scientific hypotheses, we find evidence to support or disprove the hypothesis but we can never prove the hypothesis is true.

Testing vs. Automation

Automation helps make the testing process easier, but it is not testing itself.

Testing is the human process of thinking about how to verify/falsify.

Testing is done in context; humans must intelligently evaluate the results taking this into account.

Biases

Confirmation Bias

The tendency to interpret information in a manner that confirms your own beliefs:

Congruence Bias

Subset of confirmation bias, in which people over-rely on their initial hypothesis and neglect to consider alternatives (which may indirectly test the hypothesis).

In testing, this occurs if the tester has strategies that they use all the time and do not consider alternative approaches.

Anchoring Bias

Once a baseline is provided, people unconsciously use it as a reference point.

Irrelevant information affects the decision making/testing process.

The tester is already anchored in what the system does, perhaps from docs, user stories, talks with management etc., and may not consider alternate branches.

Functional fixedness: a tendency to only test in the way the system is meant to be used and not think laterally.

Law of the Instrument Bias

Believing and relying on an instrument to a fault.

Reliance on the testing tool/methodology e.g. acceptance/unit/integration testing: we use x therefore y must be true.

The way the language is written can affect it as well. e.g. the constrained syntax of user stories leads to complex information and constraints being compressed and relevant information being lost.

Resemblance Bias

The toy duck looks like a duck so it must act like a duck: judging a situation based on a similar previous situation

e.g. if you have experience in a similar framework, you may make assumptions about how the current framework works based on your prior experience. This may lead to ‘obvious’ things being missed or mistaken.

Halo Effect Bias

Brilliant people/organizations never make mistakes. Hence, their work does not need to be tested (or this bug I found is a feature, not a bug).

Authoritative Bias

Types of Testing Techniques

Static testing:

Dynamic testing:

Scripted vs unscripted tests; compared to unscripted tests, scripted tests:

Testing Toolbox

Three main classes:

Unit testing:

Integration testing:

System testing:

Smoke testing:

Sanity testing:

Regression testing:

Acceptance testing:

End-to-end testing:

Security testing:

Test/Behavior Driven Development (TDD/BDD)

Development, NOT testing strategies.

TDD tests are blue-sky, verification tests rather than falsifiability tests. They are also prototypes, and hence should (in theory) be thrown away and rewritten - keeping them is the sunk-cost fallacy.

Audits

How will you test the system?

Look at the tests, not the techniques.

James Bach - The Test Design Starting Line: Hypotheses - Keynote PeakIT004

Testing Certifications

Standards:

International software testing qualifications board (ISTQB):

In the exam:

ISO/IEC/IEEE 29119-4 Test Techniques

Split into three different high-level types:

Specification

Equivalence Class Partitioning (ECP)

Partition test conditions, usually inputs, into sets: equivalence partitions/classes. Be careful of sub-partitions.

Only one test per partition is required.

e.g. alphabetical characters, alphanumeric, ASCII, emoji, SQL injection.

e.g. a square root function could have num >= 0, negative integers, and negative floats as equivalence classes
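
A minimal JUnit 5 sketch for the square root example (one representative test per partition; relies on Java's Math.sqrt returning NaN for negative input):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class SqrtTest {
        @Test
        void nonNegativeInputs() {   // partition: num >= 0
            assertEquals(3.0, Math.sqrt(9.0));
        }

        @Test
        void negativeInputs() {      // partition: num < 0
            assertTrue(Double.isNaN(Math.sqrt(-4.0)));
        }
    }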

Classification Tree Method

Grimm/Grochtmann, 1993:

e.g. DBMS:

Boundary Value Analysis

Test along the boundary:
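
For example, a minimal sketch (hypothetical range check): for an inclusive valid range of 1-100, test the values on and either side of each boundary rather than arbitrary mid-range values.

    class Percentage {
        boolean isValid(int value) {
            return value >= 1 && value <= 100;
        }
        // Boundary tests: isValid(0) == false, isValid(1) == true,
        // isValid(2) == true, isValid(99) == true,
        // isValid(100) == true, isValid(101) == false.
    }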

Syntax Testing

Tests the language’s grammar by testing the syntax of all inputs in the input domain.

Requires a very large number of tests. Usually automated and may use a pre-processor.

Note that a correct syntax does not mean correct functionality.

Process:

Combinatorial Test Techniques

When there are several parameters/variables. TODO

Reduce the test space using other techniques:

Decision Table Testing

AKA cause-effect table testing

Software makes different decisions based on a variety of factors:

Decision table testing tests decision paths: different outputs triggered by the above conditions.

Decision tables help to document complex logic and business rules. They have CONDITIONS (e.g. user logged in or not) and ACTIONS that are run when the conditions are met (by the user and/or system).
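
A minimal sketch of a decision table (hypothetical login/save example; each column is one rule to test):

    Conditions         R1   R2   R3
    User logged in?    Y    Y    N
    Input valid?       Y    N    -
    -------------------------------
    Actions
    Save record        X
    Show error              X
    Redirect to login            X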

Cause-Effect Graphs

AKA Ishikawa diagram, fish bone diagram.

Document dependencies.

Syntax:

Example

If the user clicking the 'save' button is an administrator or a moderator, then they are allowed to save. When the 'save' button is clicked, it should call the 'save' functionality.

If the user is not an admin or moderator, then the message in the troubleshooter/CLI should say so.

If the ‘save’ functionality is not hooked up to the ‘save’ button, then there should be a message about this when the button is clicked.

C1: the user is an admin
C2: the user is a moderator
C3: the save functionality is called

E1: the information is saved
E2: the message 'you need to be an authenticated user'
E3: the message 'the save functionality has not been called'

(Cause-effect graph diagram, garbled in the source. Its logic: (C1 OR C2) AND C3 -> E1; NOT (C1 OR C2) -> E2; NOT C3 -> E3.)

More complex diagrams should use fishbone diagrams.

State Transition Graphs
Scenario Testing

Scenarios are a sequence of interactions (between systems, users etc.).

Scenarios should be credible and replicate an end-user's experience. They should be based on a story/description.

Scenario tests test the end-to-end functionality and business flows, both blue-sky and error cases. However, scenario tests should not need to be exhaustive - these are expensive and heavily-documented tests.

Scenario tests also test usability from the user’s perspective, not just business requirements.

Random/Monkey Testing

Using random input to test; used when the time required to write and run the directed test is too long, too complex or impossible.

Heuristics could be used to generate tests, but care should be taken to ensure there is still sufficient randomness to cover the specification.

There needs to be some mechanism to determine when a test fails, and a way to reproduce the failing test.

Monkey testing is useful to prevent tunnel vision and when you cannot think laterally.

Structure-Based Techniques

Structure and data.

Statement Testing

AKA line/segment coverage.

Test checks/verifies each line of code and the flow of different paths in the program.

Conditions that are always false cannot be tested.

Similar to BVA except it is focused more on the paths rather than the input.

Branch/Decision Testing

Test each branch where decisions are made.

Branch coverage:

All branches are validated.
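
A minimal sketch (hypothetical method): two tests, one per decision outcome, give full branch coverage.

    class Classifier {
        String classify(int age) {
            if (age >= 18) {
                return "adult";  // covered by e.g. classify(20)
            }
            return "minor";      // covered by e.g. classify(10)
        }
    }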

Data Flow Testing

Tests data flows; detects improper use of data in a program, such as:

It creates a control flow graph and a data flow graph; the latter represents data dependencies between operations.

Static data flow testing analyzes source code without executing it, while dynamic data flow testing does the analysis during execution.

(e.g. data just passing through a class without being used directly by it?).
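
A minimal sketch (hypothetical method) of the define/use pairs and anomalies data flow testing looks at:

    class DataFlowExample {
        int example() {
            int y = 5;   // y defined...
            y = 10;      // ...redefined before ever being used (anomaly)
            int z = y;   // define-use pair a data flow test should cover
            return z;
        }
    }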

Experience-based Testing

Error guessing: get an experienced tester to think of situations that may break the program.

Error guessing:

07. Project Management

Industry compared to SENG302:

SENG302 teaches you an ideal way that software development should work, which businesses may not follow for efficiency reasons.

Exercise: Design a Software Methodology

Exercise: design and justify a software methodology that will replace current agile methodologies:

Use a limited subset. Assume a company building a project with its own PO (not an agency building products for external customers):

Other teams:

Analysis:

Project Management

Product manager:

Three M’s (wasteful actions)

Used in Lean

  1. Muda
  2. Mura
  3. Muri

Waterfall

There is no development method called ‘waterfall’: it is an umbrella term.

Software development life cycle (SDLC):

Waterfall can be iterative: a lot of overhead.

Why use waterfall?

Waterfall TODO:

Agile is very expensive:

Extreme Programming (XP):

Agile principles:

Lean:

Scrum:

Kanban:

Scrumban:

Project Management Certifications:

08. Riel’s Heuristics

Arthur Riel, 1996. 60 guidelines/heuristics for OO programming

Hide data within its class:

A class should not depend on its users:

Minimize the number of public methods:

Have a minimal public interface:

Avoid interface bloat:

Avoid interface pollution:

Nil or export-coupling only:

One key abstraction:

Keep related data and behavior in one place:

Separate non-communicating behavior:

Model classes, not roles:

Distribute system intelligence:

Avoid God classes:

Beware of many accessors:

Beware of non-communicating methods:

Interfaces should be dependent on the model:

Model the real world:

Eliminate irrelevant classes

Avoid verb classes:

Agent classes irrelevant:

Minimize class collaborations:

Minimize method collaborations:

Minimize methods between collaborators:

Minimize fan-out:

Containment implies uses:

Methods should use most fields of a class:

Limit compositions in a class:

Contain contents, not parents:

Contained objects should not be able to use each other:

09. Recap

Not assessed.

Accreditation of Qualifications:

WA graduate attributes:

Lifelong learning:

SENG401:

Standards

Quality:
  - What is it? Why do we care?
  - Difference in context = difference in strategy
  - Testing strategy, testing techniques - not just automated testing

Audit:
  - Observing a real team
  - Data, analysis, interpretation
  - Processes, code metrics
  - Objective/subjective analyses: hard/soft
  - V&V
  - Live review: bedside manner

TODO

What was the purpose of the audit?
  - Different, unknown technology used by the 2022 projects
  - Seeing them make the same mistakes we did: reflection
  - Team issues
  - Team cohesion: sub-groups or working alone
  - Step 1: awareness of issues
  - Ignoring the root issue
  - Importance of setting and following rules
  - How different personality types interact with each other
  - Differing expectations/goals between team members
  - Recency effect: looking back on the start of SENG302

What was the purpose of assignment 2:

Exam

Quality without a name? Objective or subjective quality?

Christopher Alexander: young professionals accept standards that are too low.