01. Introduction
Weighting:
- Assignment 1: 25%
- Assignment 2: 25%
- Exam: 50%
- Open book, 24 hrs
- Essay-style answers
4th year:
- Much more reading/writing
- Now seniors: power level much closer to the lecturer's; third years look up to us
A ‘Senior’ engineer should have:
- Critical thinking skills
- Collaboration
- Communication
- Emotional intelligence
- Leadership
- Learning
- Time management
- Self-awareness
- Taking the initiative
- Interacting with clients
- Gold-plating: (offering to) add stuff that the clients don’t need or want - scope creep
- Negotiation
What does quality mean?
- Stakeholder satisfaction?
- Should quality change depending on who is looking at it?
- If clients do things the software was not designed to do, does that make the software bad?
- Modularity?
- Supposedly allows independence (see mocking)
- The marginal gain in independence decreases the more modular the software becomes - at some point the costs outweigh the benefits
- Reliability?
- Every minute of downtime costs hundreds or thousands of dollars and can severely impact the reputation of the service
- Boeing 787: a bug that only occurred after ~8 months of continuous run-time. How do you catch such a bug?
- Ariane 5’s initial flight: reused software from previous rockets but the new hardware changed assumptions
- “If it ain’t broke, don’t fix it”
SENG401 is about critical thinking: careful consideration of problems, and recommendations supported with justifications and evidence.
Informal debates with the class. The class splits into two; each side takes an extreme position (which makes it harder to defend and requires examples).
Debate: “You should always/never document code”.
02. Principles
Controversial topic debates: defending extreme viewpoints is difficult and requires research to justify the viewpoint.
Software engineering principles not black and white:
- Contradicting principles
- Dependent on context, priorities, constraints, requirements
- Software is abstract; unlike other engineering areas, there are no laws of physics; we create the rules
Technical Debt
Design decisions made in the past under circumstances that are no longer relevant; conscious, non-ideal decisions made in the past that must eventually be corrected.
Ward Cunningham, 1992: a quick and easy approach comes with interest - additional work that must be done in the future. The longer you wait, the more code relies on the debt, so the interest grows over time.
Design stamina hypothesis (Martin Fowler): in a time-functionality graph:
- Good design is linear
- Bad/no design starts off faster, but drops off over time
- Where the lines meet is the design payoff line
Types of Technical Debt
Two axes: deliberate/inadvertent, and reckless/prudent
- Deliberate-reckless: don’t have time to design
- Deliberate-prudent: must ship now and deal with the consequences
- The ‘best’ type of technical debt
- Inadvertent-reckless: not understanding the technical debt in the project
- The worst type: don’t even know you are accruing technical debt
- Inadvertent-prudent: understanding the accrued technical debt only after writing it
Reasons for Debt
- Time: deadlines
- Faster time to market may lead to increased short-/long-term budget
- Prototypes: usually end up being part of the shipping product even though they should be thrown out once they are done
- Money: budget constraints
- Interest must be paid eventually though
- Knowledge/experience
Caused by:
- Change in business decisions
- Market changes
- Scope changes/creep
- The scrum master should:
- Be the interface between the development team and the rest of the world: marketing, management, etc.
- Never be part of the development team; they should be part of the management so that they have the authority to protect the team and say ‘no’ to the needs/wants of others
- In reality, it is cheaper to have them part of the team
- Resourcing changes
- Poor management
- Inexperienced team
Measuring Technical Debt
302: a lot of debt by the end of the year.
Measure how much technical debt there is by:
- Checking how much refactoring is being done
- Measuring sprint velocity
- This must be done from the start
Types of debt:
- Deliberate technical debt can easily be measured by documenting the debt and how much time it would take to pay it off
- Inadvertent technical debt is more difficult to measure
- Debt from third-party libraries is inadvertent, reckless technical debt (although we can get a rough estimate of debt using metrics like the number of open issues)
Interest rates:
- If the interest rate is high, it will only be used in extreme circumstances
- e.g. preparing for demo for VC funding
- For critical applications (e.g. planes, banks), technical debt is extremely expensive
- Sometimes, the debt never needs to be paid off:
- Prototypes
- When you won’t build on top of the debted code
- Programs with short lifespans - e.g. for a short advertising campaign
- Firmware - will rarely be updated
- But when a new version of the hardware is released, it will likely be reused
Positive/negative value, visible/invisible attributes:
- Visible, positive: feature
- Visible, negative: bug
- Invisible, positive: architecture
- Invisible, negative: technical debt
Pick a process/framework (Scrum/Kanban/Waterfall): Which part is devoted to Technical Debt correction/payment?
Fan-in vs Fan-out
- Fan-in:
- Number of direct dependents
- Utility functions should have a high fan-in
- The larger the fan-in, the more dependents break when the module breaks
- Fan-out:
- Number of direct dependencies
- Initialization function will likely have a very high fan-out
- Any dependency breaking may break the entire module
Refactor vs Re-engineering
- Refactor: rewrite that does not change the module’s external interface (the ‘refactor’ tools in IDEs change method names and hence signatures, so they aren’t strictly refactors)
- Re-engineering: rewrite that changes the interface and hence requires dependents to be updated
Hence, refactorings should be done as-you-go while re-engineerings should be done infrequently and only after careful planning.
Reuse vs KISS
Object-oriented programming built on:
- Code reuse
- Opportunistic: create reusable modules/methods as you go
- Internal/external: when you use external libraries, you take on their technical debt as well
- Planned/strategic: create modules/methods in preparation for plans
- Modeling the real world
But reuse didn’t work - requirements for each program and the abstractions required differ.
Reuse is big design up front:
- Waterfall: objects and entities must be designed
- ‘Just in case’ planning
- Generalized utility functions
- More planning/analyses = cost savings (if you get it right)
- Bugs easier to find and fix
Unfortunately, determining the ‘correct’ design is impossible until implementation.
Situations when reuse does work:
- Design patterns (not code)
- Libraries (module with methods that you can call)
- Frameworks (frameworks call your code)
Sapir-Whorf hypothesis/linguistic relativity: the structure of a language influences how you think. In programming terms: the programming paradigms we are used to influence our mindset and how we solve problems.
Reuse requires generic and abstract code/thinking:
- Abstraction: extracting commonalities between similar classes/objects to create a more generic class
- Extensibility:
- Open-closed principle
- Minimize impacts of future changes on existing classes
- Generic/abstract classes/packages
- Planning for future ‘maybes’
- Sometimes may be over-engineering
- Future needs may change
KISS:
- You Ain’t Gonna Need It (YAGNI)
- Do not implement until needed
- Do not try and predict the future
- Do not over-engineer
- Leads to:
- Reduced cognitive load
- 7 ± 2: working memory holds about seven items; experts can hold much more by ‘chunking’ multiple items together
- Fewer bugs
- Dead code: code whose results are not used (but it may still throw exceptions and hence have some effect)
- Unreachable code: branches that will never be executed
- Is technical debt good?
- The simplest solution may be the best solution
- Less code, fewer bugs
- Reduced complexity
- Possibly faster
- Makes testing easier - fewer inputs and branches
- Possibly faster to develop - however, finding the simple/elegant solution to a problem is often difficult
- Refactoring:
- Keep it simple; continually refactor and extend the code as required
Design Principles
Encapsulation vs Information Hiding
Encapsulation is a tool to draw a border around a module.
Information hiding is a principle where you hide internal details from the outside world. This can be done using encapsulation.
This is used to hide what varies; anything that could be changed should be hidden (e.g. algorithm used for sorting).
Hence, argument and return types should be as high/generic as possible (e.g. return Collection instead of ArrayList).
If a property or method is private, the type doesn’t matter as the type is encapsulated anyway.
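A minimal Java sketch of these two points (the class and method names are hypothetical): the concrete container and the sorting approach are hidden, and the declared return type is the general Collection rather than ArrayList.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;

class Leaderboard {
    private final ArrayList<Integer> scores = new ArrayList<>();   // concrete type is encapsulated

    public void addScore(int score) {
        scores.add(score);
    }

    // Callers only see Collection, so the internal container and the sorting
    // algorithm can both change without affecting any dependent code.
    public Collection<Integer> getRankedScores() {
        ArrayList<Integer> ranked = new ArrayList<>(scores);
        Collections.sort(ranked, Collections.reverseOrder());
        return ranked;
    }
}
```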
Visibility, Access Levels, Modifiers
‘Never use public properties; use getters and setters instead’.
Getters and setters; two extreme viewpoints:
- Getters should never be used:
- Tell, don’t ask: the class should have methods that modify the properties; other classes should never modify the property directly
- e.g. the Bread class should have a toast method instead of a Toaster class toasting the bread, with the Bread class and its sub-classes implementing a Toastable interface
- Getters should always be used:
- Everything done inside the class should also always go through the getters/setters
- This allows the property type to be changed without affecting the rest of the class: a secondary level of encapsulation
- NB: if you simply return an object, the object could be modified, defeating the point of the getter
- Hence, either return a copy or wrap it in an unmodifiable view (e.g. Collections.unmodifiableList in Java)
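A minimal sketch of the last point (the Sprint class is hypothetical): the getter hands out an unmodifiable view, so callers can read the list but all mutation goes through the class's own methods.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Sprint {
    private final List<String> stories = new ArrayList<>();

    public void addStory(String story) {
        stories.add(story);
    }

    // Returning the field directly would let callers mutate internal state;
    // an unmodifiable view (or a defensive copy) preserves encapsulation.
    public List<String> getStories() {
        return Collections.unmodifiableList(stories);
    }
}
```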
Coupling & Cohesion
Coupling: the extent to which two modules depend on each other.
Cohesion: how well the methods and properties within a module belong with each other.
Aim for high cohesion, low coupling.
Principle: keep data and behavior together (i.e. high cohesion).
The principle of separation of concerns separates data and behavior, but puts the related behaviors together.
The SOLID Principles
Single Responsibility Principle (SRP)
Each thing should only be in charge of one thing.
A responsibility = a reason for the module to change.
The SRP conflicts with the modeling of the real world, where objects usually do more than one thing:
- e.g. a modem does multiple things: it dials/hangs up, and sends/receives data
- Having both of these roles within a single interface violates the SRP
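A sketch of this modem example in Java (the interface names are illustrative, not from the lecture): the combined interface has two reasons to change, while the split interfaces each have one.

```java
// One interface with both roles couples connection management to data transfer:
interface Modem {
    void dial(String number);
    void hangup();
    void send(char c);
    char receive();
}

// Splitting by responsibility gives each interface a single reason to change:
interface Connection {
    void dial(String number);
    void hangup();
}

interface DataChannel {
    void send(char c);
    char receive();
}

// A concrete modem can still play both roles, but clients depend only on the role they use.
class ConcreteModem implements Connection, DataChannel {
    public void dial(String number) { /* open the connection */ }
    public void hangup() { /* close the connection */ }
    public void send(char c) { /* transmit a character */ }
    public char receive() { return '\0'; }
}
```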
In addition, applying the SRP mindlessly can lead to:
- Increased coupling and needless complexity
- Getting all the data you need may require it to pass through multiple middlemen if the data is spread too thin
- But at the same time, having all the data together can lead to a god class
- Difficulty in on-boarding new team members or understanding how to architect the program
- Code fragmentation and broken/leaky encapsulation
Figuring out what the single responsibility should be can often be difficult.
Robert Martin’s thoughts on SRP:
…This principle is about people.
When you write a software module, you want to make sure that when changes are requested, those changes can only originate from a single person, or rather, a single tightly coupled group of people representing a single narrowly defined business function.
Imagine you took your car to a mechanic in order to fix a broken electric window. He calls you the next day saying it’s all fixed. When you pick up your car, you find the window works fine; but the car won’t start. It’s not likely you will return to that mechanic because he’s clearly an idiot.
Open/Closed Principle (OCP)
Modules should be open for extension, but closed for modification.
That is, you should be able to extend the behavior of an existing program without modifying it.
Interfaces are useful because they are an agreement that you will follow some defined behavior (for all public methods/properties); that is, Design-by-contract:
- Pre-conditions: entry conditions that the client must ensure are met
- Post-conditions: obligations by the service that must be true when the service method exits
- Invariants: properties that are guaranteed to be maintained
- All children must abide by their parent’s contract: they can loosen pre-conditions and tighten post-conditions, but not vice-versa
- e.g. the Java Collection interface’s add method returns a boolean: whether or not the collection has been modified as a result of the operation
The open/closed principle forces abstractions and loose coupling and often requires dependency inversion.
Libraries and plug-in architectures are often good examples of OCP.
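A minimal sketch (hypothetical names) of the idea: new behaviour is added by writing a new class against the abstraction, while the existing code stays closed for modification.

```java
interface ExportFormat {
    String export(String report);
}

class CsvExport implements ExportFormat {
    public String export(String report) { return "csv:" + report; }
}

// Added later without touching ReportWriter or CsvExport: the system is open for extension.
class JsonExport implements ExportFormat {
    public String export(String report) { return "{\"report\": \"" + report + "\"}"; }
}

class ReportWriter {
    // Closed for modification: it depends only on the ExportFormat abstraction.
    String write(String report, ExportFormat format) {
        return format.export(report);
    }
}
```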
Can a program be fully closed? Probably not as this requires big design up-front.
Protected Variation: anything that is likely to change should be hidden and pushed downwards, with stable interfaces above/around them.
Liskov-Substitution Principle (LSP)
You should be able to change the subclass of an object without changing the behavior of the program i.e. design-by-contract: children adhere to their parent’s contract.
The LSP is not easy to implement and has no immediate benefits; rather, it gives long-term trust in modules.
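A minimal sketch of a violation (hypothetical classes, not from the lecture): the subclass tightens its parent's pre-condition, so substituting it changes the behaviour of code written against the parent's contract.

```java
class Account {
    protected long balanceCents = 0;

    // Contract: pre-condition amountCents >= 0; post-condition balance increases by amountCents.
    public void deposit(long amountCents) {
        if (amountCents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += amountCents;
    }
}

// Violates the LSP: it tightens the pre-condition by rejecting amounts the parent accepts,
// so any code that deposits small amounts breaks when handed this subclass.
class MinimumDepositAccount extends Account {
    @Override
    public void deposit(long amountCents) {
        if (amountCents < 500) throw new IllegalArgumentException("minimum deposit is $5");
        super.deposit(amountCents);
    }
}
```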
Interface Segregation Principle (ISP)
Clients should not be forced to depend on interfaces/methods they will not use:
- Many specific interfaces over one generic one
- Avoids interface pollution
- Classes should not be forced to implement irrelevant methods
Robert Martin’s original article.
Dependency Inversion Principle (DIP)
High-level modules should not depend on low-level modules: both should depend on abstractions/interfaces.
From this, it follows that:
- Abstractions should never depend on details
- Code should depend on things that are at the same or higher level of abstraction
- High-level policy should not depend on low-level details
- Low-level details can change
- Low-level dependencies should be captured in low-level abstractions
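A minimal sketch (hypothetical names): the high-level policy and the low-level detail both depend on an abstraction, so the policy never references the detail directly.

```java
// Abstraction, conceptually owned by the high-level module.
interface MessageSender {
    void send(String to, String body);
}

// Low-level detail: depends on (implements) the abstraction.
class SmtpSender implements MessageSender {
    public void send(String to, String body) {
        System.out.println("SMTP -> " + to + ": " + body);
    }
}

// High-level policy: no reference to SMTP or any other transport specifics.
class OrderNotifier {
    private final MessageSender sender;

    OrderNotifier(MessageSender sender) {
        this.sender = sender;
    }

    void orderShipped(String customerEmail) {
        sender.send(customerEmail, "Your order has shipped");
    }
}
```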
Mostly taken for granted by the newer generation of programmers learning OO languages.
Common Closure Principle (CCP)
SRP at the package level: classes in a package should be closed together against the same kind of changes.
- Highly-coupled classes should be grouped together in a package
- With the end result of increasing cohesion at the class level
- What affects one affects all
Common Reuse Principle (CRP)
Classes in a package are reused together: if you reuse one class, reuse all of them.
Classes being reused within the same context should be part of the same package.
e.g. the java.util package in Java.
Abstract Factory (AKA Kit Pattern)
Dependency inversion: client no longer needs to care about the specifics of the implementations.
Factories define an interface to instantiate new instances of a specific implementation of a class/interface, removing the need for a client to know the exact type being instantiated.
Hence, this is an example of dependency inversion as the client uses an interface to distance itself from the specific class and constructor being called.
An abstract factory takes this further by giving the factory interface methods to instantiate multiple related (and possibly dependent) objects.
The abstract factory keeps behavior, not data, together.
Factory methods give looser coupling; the details (how the objects are instantiated) are pushed down to concrete classes, while interfaces are given to the higher layers (abstract classes)
The abstract factory is an example of parallel hierarchy: multiple hierarchies following the same structure. e.g.:
Operator Vehicle
______|______ ____|____
▽ ▽ ▽ ▽
Pilot Cyclist Plane Bike
The factory method ensures the right operator is assigned to the vehicle. But what if you already have a specific operator you want to assign to the vehicle?
If you have a setOperator(Operator) method on the Vehicle interface, it defeats the point of the factory method. Rather, the concrete classes (Plane, Bike) must have setPilot(Pilot) and setCyclist(Cyclist) methods.
That is, go as high as you can in your hierarchy, but no further - there is no point raising it to the top if it means it fails to meet your requirements.
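A minimal Java sketch (class names illustrative) of the Operator/Vehicle example: each concrete factory produces a matching pair, so the client never names Pilot, Cyclist, Plane or Bike.

```java
interface Operator {}
interface Vehicle {}

class Pilot implements Operator {}
class Cyclist implements Operator {}
class Plane implements Vehicle {}
class Bike implements Vehicle {}

// The abstract factory ('kit'): one creation method per related product,
// so the products handed to a client are guaranteed to match.
interface TransportFactory {
    Operator createOperator();
    Vehicle createVehicle();
}

class AirTransportFactory implements TransportFactory {
    public Operator createOperator() { return new Pilot(); }
    public Vehicle createVehicle() { return new Plane(); }
}

class RoadTransportFactory implements TransportFactory {
    public Operator createOperator() { return new Cyclist(); }
    public Vehicle createVehicle() { return new Bike(); }
}

// Client code depends only on the interfaces (dependency inversion).
class Trip {
    private final Operator operator;
    private final Vehicle vehicle;

    Trip(TransportFactory factory) {
        this.operator = factory.createOperator();   // always matches the vehicle below
        this.vehicle = factory.createVehicle();
    }
}
```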
Stable Dependencies Principle (SDP)
We want stability (a lack of changes) at the top of the hierarchy. See: hide what varies, contracts.
A module should depend on modules that are more stable than itself.
Maximum stability: if the environment changes, the module can’t change. This additionally requires big design up-front.
Should stability/instability be distributed across the entire program? No; some parts of the program will need to change frequently.
Stable Abstractions Principle (SAP)
A module should be as abstract as it is stable:
- Concepts should be stable and abstract; real-world objects should be more concrete
- Unstable classes should be concrete
- Changes should be made on concrete classes
- Maintainability: TODO
- Extendability: open-closed principle
Tell, Don’t ask
Law of Demeter
If you have method M in object O, then M can call the methods of:
- O
- O’s direct component objects
- Two-dot properties/methods (e.g. a.b.c()) increase complexity and make the code harder to understand
- Try adding a method to a which calls b.c() if possible (see the sketch after this list)
- This message-chaining is a code smell
- Confidence:
- Confident about yourself
- Less confident about your friend
- Low confidence about your friend’s friends
- M’s parameters
- Objects instantiated inside M
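A minimal sketch (hypothetical classes) of removing a message chain through delegation, as per the two-dot point above.

```java
class Engine {
    void start() { /* spin up */ }
}

class Car {
    private final Engine engine = new Engine();

    // Delegation keeps callers one dot away from what they need.
    void start() {
        engine.start();
    }

    Engine getEngine() { return engine; }
}

class Driver {
    void drive(Car car) {
        car.start();                // talks only to its 'friend'
        // car.getEngine().start(); // reaches into the friend's friend - the message-chain smell
    }
}
```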
#noestimate
What does it mean?
- Estimating is a waste of time; just do the work
- Estimates aren’t useful; tasks take as long as they will take
- Clients get unhappy when estimates are not met
- Complexity of work means estimates are unreliable
Standard agile estimates story points to determine the number of stories that can be done in the sprint and to calculate velocity.
#noestimate instead just completes tasks by priority and uses the tasks completed to calculate velocity. As the tasks are sliced vertically, the client gets a tangible end result at the end of each sprint.
So why estimate? The process (e.g. planning poker, discussion) is useful even if the estimates themselves are not.
Vertically slicing means:
- Even if the project is stopped prematurely, there is something to deliver
- Requirements can change; if customer realizes they did not want what they asked for, or the situation has changed, you can change future stories
- Stories may get quite large:
- MVP: do the minimum required to get the story working
- Downside: re-engineering may be required in the future
- Alternatively, talk to the PO and do things ‘right’ if you are certain you will need certain functionality later
Story mapping:
- Epics: giant stories
- Split the epic into stories
- Prioritize stories such that useful functionality is delivered each sprint
- Walking skeleton: when multiple epics are done simultaneously such that the skeleton of the epics slowly comes together each sprint
Class Debates
Always/Never Write Documentation
Always:
- On-boarding/knowledge transfer
- Justifying design decisions
- Large codebases, makes navigability easier (usability)
Never:
- Documentation always lags behind; old documentation can be counter-productive
- Reading the code can be more useful than reading documentation
- Documentation can be an excuse for bad/complex code
Always, counterpoints:
- Complexity of code matches complexity of the problem space: code describes how, not why
- Documentation: high-level overview
- Technical debt: code will not be perfect; need documentation to explain what needs to be changed
- Documentation should be written before the code (like TDD) - in this case, documentation will always be updated (like TDD)
- On-boarding new technologies
Never, counterpoints:
- Bob Martin: “A comment is a failure to express yourself in code. If you fail, then write a comment; but try not to fail”
- Documentation not being updated still remains an additional risk
- There should always be an obvious solution
- Code too complicated to be understandable: can always be simplified to a point where the code is self-explanatory
- Grady Booch: “Clean code is simple and direct. Clean code reads like well-written prose. Clean code never obscures the designer’s intent but rather is full of crisp abstractions and straightforward lines of control.”
Collective vs Individual Code Ownership
Collective:
- High bus factor: a person leaving the company will not take critical knowledge with them
- Allows a cross-functional team
- Reduces knowledge siloing
- Improves review process: more people familiar with the code base
- Reduces level of pre-planning required
- Stops the blame game; share the blame and reward; the process, not the individual, had issues.
- When person leaves the project, who remains responsible for the code?
- Less communication overhead - do not need to talk to specific person in order to make improvements/bug fixes
- http://www.extremeprogramming.org/rules/collective.html
Individual:
- Does not mean siloing; means accountability
- Ensures there is always an expert for any part of the code base
- Allows specialization; more efficient distribution of labor
- Code reviews: fresh set of eyes better for finding issues, bugs
- Higher standard of quality when your name is attached to your code
Individual, counterpoints:
- Code ownership required in a small company; there needed to be a domain expert?
- Collective ownership means completely anonymous code and no accountability
- Allows more even split over workload
- Reduces risk of merge conflicts as there should only be one person working in each area
- Too many cooks
- Code creator will always have better understanding of the code - know who to talk to
Collective, counterpoints:
- Code reviews a form of collective ownership
- Responsibility and ownership are different: you are still responsible for quality, tests, etc. of the code you wrote
- Individual ownership often becomes ‘your code is broken, fix it’
- If owner on vacation, you get stuck
- If your own code and no one else can work on it, why bother documenting?
- Merge requests: even if there is individual ownership there are still changes that require multiple modules to be updated in tandem
03. Audits
Independent party verifying that the processes are being followed and the end product meets the requirements.
Formal software audits:
- Uses formal methods and mathematics to prove correctness
- Requires code freeze
- Takes a lot of time and resources
- Useful for critical systems - where there are legal standards (e.g. health, financial)
Less-formal software audits:
- Does the software:
- Have good quality
- Do what the customer wants
- Attempts to do this with a much lower cost and time
- Lower confidence and degree of correctness, but may be enough
- Development can continue
In SENG401, a less formal software audit will be done on SENG302 teams.
Software outcomes can be divided into two strands:
- Does it meet the acceptance criteria?
- Explicit, implicit, and non-functional requirements
- Does it adhere to a process that increases the chances of success:
- Development/quality process
- Examine work artifacts: backlog, scrum board, walkthroughs, logs, test plans/protocols
- Critique team processes
- Resourcing: is there enough staff?
- Examine training/on-boarding processes
SENG302 Audit
Part 1: Report
Observe:
- Potentially shippable products; check the ACs
- The codebase:
- Architecture
- Metrics
- Code smells
- Extensibility
- Amount of over-engineering, gold-plating
- Amount of re-engineering
- Testing and metrics
- And how it changes over time
- Bugs and issues
- Commit history
- The process:
- Scrum board
- Team processes (e.g. estimation)
- Code review, merge request process
- Wiki
- Teamwork:
- Observe how the teams work
- Tuckman model
- Informal chats
- Pair-programming, co-location
- Workload distribution
- Culture
- Bike shedding: spending time on trivial matters
- Group think
- Hero culture: one person doing all the work
- Death march: doing everything at the last minute all while knowing that you will not meet the deadline
- Not communicating with the PO due to the power-level difference and PO not recognizing this
- Personalities
- Communication with:
- Teaching staff
- Other teams
- PO, SM
Can ask Moffat for summarized peer feedback/self-reflection, but not the full submissions.
Then, the audit report:
- Diagnosis: what is the current state of the team
- Prognosis: what is the future state of the team
- Recommendations: what do you recommend to improve the team, and what will their future state look like if these recommendations are followed
There must be evidence, ideally multiple factors that corroborate the conclusions drawn.
Part 2: Live Review
Diagnosis, prognosis, recommendation.
Talk to the team - the patient, professionally:
- With respect
- With empathy
- Sitting down - standing up emphasizes power-difference
- Arrange the room so that you are talking to the team, not the audience
- Don’t present; talk and discuss with them
Prognosis:
- Long-term future with the current state of affairs
- Long-term future if recommendations are followed
Misc:
- Balance out both positives and negatives to not overwhelm the ‘patient’
- Tell them that we, not them, are the ones being assessed
- Place feedback in the context of industry, not the course
- e.g. ‘log or else Moffat will get angry’; that only helps them pass the course
- No identifying information - no naming and shaming; anonymize graphs
04. When Good Design Goes Bad
UML requires big design up-front, synchronization of diagrams and code. However, it is useful for communication.
What we’ve learned:
- 201: Design is important and achievable
- 301: Quality is important and achievable
- 302:
- Many things are important:
- Business decisions
- Time/cost/resources
- Processes
- The team is important
- The individual is important
- The customer is important
- Priorities
- Everything
- All are achievable, but not by us
- 401:
- What is design?
- What is quality?
- What is process?
- Are any of these achievable?
Design Erosion
AKA: architectural drift, software aging, architecture erosion, software decay, software rot, software entropy.
When the initial design becomes more and more obsolete:
- The world changes, requiring the design to change:
- New requirements, incremental
- Technical debt
- Hacks during rush jobs
- Bug fixes/patches, local corrections
- Changes in the business environment/strategy
- Solving different problems that the architecture is not capable of meeting
- Adoption of new solutions/technologies
- People change
- Vaporized design decisions
- Original designers left
- Undocumented design decisions
- Documents not followed
- Quick fixes
- Inexperienced teams (new programmers and/or new teams)
- Iterative methods/practices
- Time/cost pressures
- Not enough language support
Consequences:
- Accumulation of sub-optimal design solutions, leading to further design erosion
- Increased time/cost to add/modify/fix bugs or features
- Workarounds, ad-hoc approaches, trying to fix the symptom instead of the problem
- Fixing one bug makes two more, fixing those causes a hundred more
- Negative impact on development
- Deployment problems
- A large and growing number of bugs/issues
- Spiraling descent towards worsening design
Eventually, a replacement, rewrite, re-engineering or refactor becomes required.
So what to do when changes occur?
- Optimal design strategy
- No-compromise in design
- High local/immediate costs
- Minimal effort strategy
- ‘Stretching’ design rules
- Could help with cost/time constraints
‘Natural’ Rot
… the design of a software project is documented primarily by its source code
Robert C. Martin
To destroy an abandoned building, cut a hole in the roof and wait for it to rot from the inside out.
Software works the same way; without proper maintenance, a small hole can lead it to decaying from the inside.
Broken window theory: hacks in software normalize other hacks, leading to a spiraling descent in quality.
Symptoms of rot:
- Rigidity:
- Design resists change, even for simple changes
- Changes cause cascading changes; high coupling
- Code built on top of rigid code becomes more rigid
- Leads to:
- Fear of letting developers fix non-critical problems
- Unknown time necessary to make fixes or changes
- “Official rigidity”: management is not flexible, leading to inflexible teams that cannot respond to design deficiencies
- Fragility:
- A single change can break things in multiple places with (seemingly) no relationship to the changed area
- Leads to:
- Fear of making fixes and breaking more things, or of having no idea where the underlying issue is
- Increase in fragility; fixes are minimal and likely add more fragility
- Immobility:
- Difficult to get re-usable components - a tangled and highly-coupled mess
- Component dependent on too much baggage
- Leads to:
- Rewrites instead of reuse
- Copy/paste programming
- Multiple points of failure
- Viscosity:
- Software viscosity:
- Easier to add a hack than to conform to the design
- Leads to increased rot speed
- Environmental viscosity:
- Slow development environment
- Long compile, check-in, and deployment times
- Leads to:
- Skipping processes to speed things up
- Hacks, merge conflicts, bad designs
- Needless repetition:
- DRY (Don’t Repeat Yourself)
- Business rules not implemented in a clear and distinct manner
- Copy pasting
- Leads to:
- Reduced trust in the software, team, management, company
- Reduced comprehension of code
- Rigidity: changes need to be implemented in many places
- Opacity:
- How understandable is the code?
- Code that evolves over time becomes more difficult to understand
- Leads to:
- Reduced comprehension of code and business rules
- Fragility
- Rigidity
- Needless complexity:
- Due to the developer looking out for future extensions
- May lead to a lot of additional classes that must be maintained and dead code (results not used)
- Leads to:
- Over-engineering
- Other rot symptoms
Preventing Rot
Address problems immediately:
- Broken window theory: sloppy code invites more sloppy code
- Patch the holes, even the ones that you made; this may take a lot of effort and require you to convince management
- Determine the root cause; don’t just fix the symptoms:
- Developers should discuss and have a meeting
- Code reviews
- Do not allow even one hole or broken window
- Very difficult to achieve
- Fix the issues the moment you find them - requires open communication with PO
- Add processes to identify and measure problems, then make strategies to fix them
- SOLID principles
Class Discussion: How Do Classical and Modern Processes Influence Design Erosion
Waterfall:
- Big design up-front: designs can’t be changed, so no rot but bad designs can’t be fixed
- Iterative waterfall: changes can consider how the design needs to change and how to best change it to reduce erosion
- If requirements do change within the waterfall cycle, rigid structure means hacks may be required to fit them in
Agile:
- Prioritizes working software over code quality
Waterfall in business:
- Very well-defined process
- Predictable: budget, time-lines written down
- Legal can take a look at the project before it starts
- Used in other engineering disciplines
Design/Code Smells
NB: code smells can also refer to good smells.
An indication/symptom that something may be wrong: but does this mean it should be fixed? Two approaches:
- Purist: Where there’s smoke, there’s fire: fix it
- Pragmatic: check it out and fix it if it is major
Smells: Within Classes
- Long methods:
- Too much functionality: SRP, separation of concerns
- Lines of code
- Complexity
- Long parameter lists:
- High coupling
- May be doing too much
- How many arguments are too many? 3? 4?
- Large classes:
- Too much functionality: SRP
- God class
- Comments:
- A sign that the code is too hard to understand - unreadable code
- Code should read like prose
- May get out of date, leading to reduced trust in documentation, and may not be read
- Duplicated code:
- Pull it out into its own utility method
- Rule of three:
- Duplicate once
- Could be called over-engineering
- Maybe duplicate twice
- By the time you get to three, pull the code out into a method
- Combinatorial explosion
- Dead code, unreachable code
- Speculative generality:
- Making things more general for perceived future needs; over-engineering
- Oddball solution:
- Many different ways of solving the same problem
Smells: Between Classes
- Primitive obsession:
- Using primitives for everything rather than classes/objects (see the sketch after this list)
- Hard to read
- Can’t be extended; can’t add constraints etc.
- Keep data and methods together: can’t add methods to primitives
- No type-safety
- Data class: classes with properties but no methods/functionality - keep data and behavior together
- Refused bequest:
- Child classes cannot implement parent contracts
- Inappropriate intimacy:
- Severe coupling between classes
- Indecent exposure:
- Fields that should be private but are public
- Lazy class:
- Classes that don’t have much functionality
- Increased complexity
- Message chains:
- Law of demeter
- Shotgun surgery:
- Change that requires multiple things to be changed
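A minimal sketch (hypothetical class) of the primitive obsession point above: wrapping a primitive in a small value type keeps constraints and behaviour next to the data and adds type safety.

```java
final class Email {
    private final String value;

    Email(String value) {
        // Constraints live with the data instead of being re-checked everywhere.
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("invalid email: " + value);
        }
        this.value = value;
    }

    String domain() {                 // behaviour kept next to the data
        return value.substring(value.indexOf('@') + 1);
    }

    @Override
    public String toString() { return value; }
}

// void register(String name, String email) { ... }   // primitives: arguments easy to swap by mistake
// void register(String name, Email email)  { ... }   // type-safe and self-validating
```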
Metrics
If you can’t measure it, you can’t improve it
Peter Drucker
You can’t control what you can’t measure
Tom DeMarco
Some context-dependent measure of a project, usually measured over time to track how the project is improving or getting worse. However, the context can change over time, making interpreting the metrics and making comparisons over time more difficult.
Benefits:
- Quality assurance
- Software/project/development management
- Performance
- Tracking issues, bugs
- Estimations
- How to estimate?
- Management
- Identifying parts/modules to improve
- Prioritize work
- Reduce costs
- Understand where resources should be put
- Return on investment; measurable improvement to the code base
- Workload: can show management impacts of over-working the team
Dangers:
- Gaming the system - improving metrics but not the code
- Goodhart’s Law: any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes
- Not having an understanding of the metric
- Metrics being used by management in performance reviews
- e.g. LoC added, deleted, modified, survivability:
- File being indented/formatted causes many changes
- Moving methods to different files
- Getters/setters: low-effort code?
- Concentrating on a single metric
- Reducing a complex situation to a single metric
- Comparing between two very different projects
For a metric to be useful:
- They should be easily and quickly calculated
- They should be run often
- Several metrics should be collected
- Trends are more important than values
- Outliers should be checked
- TODO
Difficulties:
- Measuring code does not measure the design
- TODO
- Limited tool support (but getting better)
- Metrics require interpretation
- Ignorance is prevalent
Alternative ways to identify code smells:
- Ownership and expertise
- TODO
- Social structures
- Experts
- Do people follow the most knowledgeable or the loudest?
- TODO
Measurements
McCabe’s Cyclomatic Complexity:
- Complexity of a function; depends on control flows
- M = E - N + 2P, where:
- E is the number of edges in the control-flow graph
- N is the number of nodes (statements)
- P is the number of connected components (for a single function, P = 1)
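A small worked example (hypothetical method), using the equivalent shortcut of counting decision points: for a single method (P = 1), M equals the number of decision points plus one.

```java
class MarkStatistics {
    // Cyclomatic complexity annotated inline: M = decision points + 1.
    static int countPassing(int[] marks) {
        int passing = 0;
        for (int mark : marks) {              // decision 1: loop condition
            if (mark >= 50) {                 // decision 2
                passing++;
            } else if (mark < 0) {            // decision 3
                throw new IllegalArgumentException("negative mark");
            }
        }
        return passing;                       // M = 3 + 1 = 4 independent paths through the method
    }
}
```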
Chidamber and Kemerer OO Metrics:
- Weighted methods per class (WMC)
- Sum of method complexities in a class
- Depth of inheritance tree (DIT)
- Max inheritance depth
- Deeper trees: if something near the top of the tree breaks, everything below also breaks
- Number of children (NOC)
- Specialization polymorphism in terms of contracts?
- Number of immediate children
- Closer to parent contract, so higher is better
- Coupling between objects (CBO)
- Number of classes to which a class is coupled
- Response for class (RFC)
- Number of class methods, plus number of remote methods called directly by the methods (or through the entire call tree)
- TODO
- Coupling
- Lack of cohesion in methods (LCOM):
- Increase cohesion: keep related data and behavior together
- Decrease cohesion: single responsibility principle, separation of concerns, open-closed principle
TODO Lorenz and Kidd:
Smells:
- Long methods:
- LOC: too long
- CYCLO: too many conditional branches
- MAXNESTING: nesting too deep
- NV: too many variables
- A lot more
Other:
- Process metrics:
- Team velocity
- Story points completed per unit time
- Burn down/up chart
- Code churn
- How often methods/classes get modified
- High churn:
- Dependents require updating as well
- Complex method that is not quite worked out
- KISS - simple is hard
- May lead to design erosion
- App crash rate
- Lead time
- Time taken from PO request to finished feature
- Active days
- Time taken from picking up feature to finishing it
- Mean time between failures (MTBF)
- Mean time to recover/repair from failures (MTTR)
Refactoring
Refactoring will TODO
When to refactor?
- During development
- During: start clean
- After: to clean up
- When fixing a bug
- Code reviews
- The rule of three; duplication and speculative generality
TODO
- Check metrics
- TODO
- Check metrics - have they improved?
Correctness:
- Formal: prove semantics and correctness of program transformation
- Implementation: unit/regression tests - ensure the implementations meet specifications
Rewrites:
- TODO
Reengineering:
TODO
05. Standards
Standards
Groups of international experts sharing knowledge to develop solutions to common problems found in a range of activities.
Standards help with:
- Sharing knowledge
- Identifying aspects/pitfalls
- Understanding why processes exist:
- Chesterton’s Fence
- Regulations are written in blood
- Creating a shared language
- Creating regulations/policies
- Understanding and addressing problems/challenges
- Giving customers some confidence in products/organizations
- And customers expect others to use them or at least know about them
- Allows consistency and interoperability between systems
Some organizations:
- ISO: International Organization for Standardization
- IEEE Standards Association
- International Electrotechnical Commission
Problems:
- Competing standards
- Usually very generic
- Can be heavy in red tape/process: requires lots of policies, documents, data gathering
- Little immediate reward; very boring
- May require certification to give confidence to customers
Understanding the standard does not mean you understand how to apply them/implement the solution or attain quality.
One of UC’s (5th?) student management systems:
- 26 million paid initially to create new system, 2 million per year support
- Delivered
- Crashes; enrollment delayed
- Provider says issue with environment
- Buy new servers; still not working
- Upgrade network; still not working
- Give up; go back to old system
- Millions wasted, students may have enrolled elsewhere due to delay, admin delayed, staff annoyed
Quality and Service Standards
IEEE Standard 1012: system, software, and hardware verification and validation.
Determines if requirements are:
- Correct
- Complete
- Accurate
- Consistent
- Testable
4 integrity levels.
Verification and Validation (V & V)
Comparison:
- Verification:
- Checking the process during the life cycle activity
- Are we building the product correctly?
- There is some correlation between process and product; hence, a good process gives some confidence in the quality of the product
- Validation:
- Checking that the product meets the requirements at the end of life cycle
- Includes non-functional requirements, things that the customers may not have thought about
- Did we build the correct product?
- Giving customers confidence in the product
Verification:
- Evaluating work products during the life cycle activity to determine if it is being built according to the requirements
- Evaluation of:
- Plans
- Requirement specifications
- Design specification
- Code
- Test cases
- Through the use of:
- Reviews
- e.g. building on different OSes on Erskine (‘customer’) computers, under heavy load
- Walkthroughs
- Informally going through code and/or features with developers
- Reverse walkthrough: customer explains to developer how to use products to find misunderstandings
- Inspections
- Standups, formal sessions etc.
Validation:
- Checking that the code matches user needs and fulfills its intended use
- Were the specifications even correct?
- Evaluation of product/software
- Through the use of testing (e.g. acceptance testing)
Software integrity level (SIL)
- SIL 1: Negligible consequences if the element fails - mitigation not required
- SIL 2: Minor consequences if the element fails - complete mitigation possible
- SIL 3: Serious consequences if the element fails
- Permanent injury, major system degradation, economic/social impact
- e.g. bank accounts etc.
- Partial to complete mitigation required
- Lots of quality assurance
- SIL 4: Grave consequences - no mitigation possible
- Loss of life, systems, economic or social loss
NASA matrix:
Consequences
^
| SIL3 SIL4
|
| SIL1 SIL2
--------------->
Error Potential
Engineering V Model
Waterfall-type development lifecycle. On the left, the requirements and design are verified and validated; on the right, the system is verified and validated.
Stakeholder <------------------------ User acceptance
requirements Validates testing
\ ^
\ /
v Verifies /
System <---------------- System integration
requirements testing
\ ^
\ /
v Verifies /
Subsystem <----------- Integration
requirements Testing
\ ^
\ /
v Verifies /
Unit/component <------- Unit
requirements testing
\ ^
\ /
v /
Development
Task: SENG302 Verification and Validation
Verification (remember; verification requires artifacts that the assessor can view):
- Scrum board, burn-down chart
- Quality drops by the end of the sprint
- Code smells: Sonarqube
- Ceremonies: output of formal sessions
- Postman testing results
- Cucumber: integration testing results
- Were alternative flows missed?
- GitLab code reviews: DoD, comments, feedback
Did we have enough work products/artifacts to validate?
Design decisions: if not recorded, harder to validate/verify
Validation:
- Checking against ACs
- User testing: summaries of results
- Did we verify that the product was usable and met non-functional requirements, or did we just check that the ACs were met?
- Feedback on story success
- Manual testing
- Record of steps, results and when they were run
- Of course, they were run multiple times right?
- Gets unwieldy when there are too many tests
As a customer to a SENG302 team, how does the team give confidence that they can deliver?
- Look at previous projects: verify/validate, independent auditors?
- Frequent meetings with team/leader?
- Output of previous sprints
- Improvements between sprints? Are they making the same mistakes every time?
- Do they have processes to ensure everyone can deliver?
- e.g. code style, code reviews, white-box testing
Problem: a large number of software projects fail. Why?
- Each software project is seen as uncharted territory
- Predictability is low
- Quantitative details (work artifacts, e.g. metrics) either do not exist or exist only at a superficial level within single projects
- This makes it hard to gain confidence
- Software projects cost a lot of money
How can a software company ensure high-quality, low failure, high predictability and consistency?
Capability Maturity Model
Military software contracts were often late, failed, or went over-budget. The US DoD’s Software Engineering Institute developed the Capability Maturity Model to quantify how mature a software business is and to assess its practices, processes, and behaviors.
Five aspects of CMM:
- Maturity levels
- Key process areas (KPA):
- Activities that, when done together, lead to an important goal being accomplished
- Goals:
- State required for key process area to be met
- Must include scope, boundaries, and intent of each KPA
- ‘Goals met’ define capability
- Common features:
- Practices that implement/institutionalize a KPA
- Five common features:
- Commitment to perform
- Ability to perform
- Activities performed
- Measurement and analysis
- Verifying implementation
- e.g. everyone understands standups, sprint planning, but also organization-specific processes or processes where each organization has their own unique twist
- Key practices
- Elements that contribute most effectively to implementation and institutionalization (including infrastructure)
- e.g. employee on-boarding processes
Maturity levels:
- Initial:
- Immature
- Ad-hoc, chaotic
- Uncontrolled
- Reactive
- Unstable environment
- Technical debt not considered
- Success dependent on competence and heroics of people
- Most SENG302 teams here at the end of the year
- Repeatable:
- Basic project management
- More than just burndown/burnup charts
- Some processes repeatable
- Requires processes to be explicit and documented
- Repeatable processes could have consistent results
- Repeatable processes likely to be project specific, not organization wide
- Defined:
- Sets of defined standard processes
- Documented
- Standardized
- Integrated
- Improvements documented over time
- Not necessarily validated
- This helped us last time, will probably help us in other projects
- Users may not be competent in following the processes
- Managed (capable):
- Processes are validated
- Enough data to prove the effectiveness of their processes
- Processes are monitored and controlled; data collected and analyzed
- Processes are effective across a range of conditions/projects
- e.g. different technologies, languages, teams
- Users competent in the processes
- Easily adapted to new projects without measurable loss in performance
- Performance is quantifiably predictable
- Optimizing (efficient):
- Processes tweaked and quantifiably improved over time by analyzing performance patterns, while also maintaining predictability and meeting improvement objectives
- Quantitative understanding of variation
- Everyone involved with improving processes
Capability Maturity Model Integration (CMMI)
Successor to CMM by the Carnegie Mellon University.
Focuses more on results rather than activities when compared to the CMM.
CMM is based heavily on the paper trail; CMMI focused more on strategy (but still a lot of paper/documentation).
- Initial
- Managed
- Processes are planned, performed, measured, controlled
- Commitments with stakeholders, revised as needed
- Defined
- Standards, procedures, tools, methods created
- Developers can move between projects easily - consistency between teams and projects
- Quantitatively managed
- Also looks at sub-processes
- Optimized
- Continuous improvement; incremental/innovative technological improvements
Models for:
- Acquisition:
- Problems relating to suppliers and expectations
- Unique KPAs:
- Agreement management
- Acquisitions requirements development
- Acquisitions technical management
- Acquisition validation, solicitation, supplier agreement development
- Development:
- Any type of product development
- Trying to eliminate defects in products/services
- Unique KPAs:
- Product integration
- Requirements development
- Technical solution
- Validation and verification
- Services:
- Service demand, maintaining high-levels of customer services
- Unique KPAs:
- Capability and availability management
- Incident resolution and prevention
- Service continuity
- Service delivery
- Service system development
- Service system transition
- Strategic service management
Appraisals:
- Feedback on business’s maturity and how to improve
- Appraisal by certified third parties
Pros and Cons
Pros:
- Puts more emphasis on management
- Things must be done, not just developers wanting to do things
- Takes the onus of performance off the developer or even the team
- Consistency across projects, organizations, and time
- On-boarding new developers easier
- Learned information is not lost - processes are improved over time across the entire organization
- Bus factor (to a point)
- Figuring out the root cause of problems
- Emphasis on reflection, self-evaluation, monitoring, critiquing and continual improvement
Cons:
- Much more risk averse
- Maintaining their CMMI certification becomes very important
- Level hunting/level up
- Instead of making the team better and improving technical performance, the organization is always looking to improve their paperwork, overhead etc. and gain a higher certification level
- Individual/team development
- Focus on process/management, not improving technical skills
- Process heavy
- Standards/processes do not automatically equate to quality
- No guarantee that the project will actually be developed using these processes; paperwork does not mean the team is actually following it
- Aimed more at strategic management, not development
Immaturity Models
Businesses that are below level 1: anti-patterns to avoid.
- Level 0: Negligence
- The company does not care about processes
- All problems are perceived to be technical problems
- Managerial/QA activities are deemed to be overhead, superfluous to the ‘actual’ work
- Reliance on silver bullets
- Level -1: Obstructive
- Company forcefully goes against productive processes; counterproductive processes imposed instead
- Processes are rigidly defined, adherence to the form is stressed
- Ritualistic ceremonies
- Collective management precludes assigning responsibility
- Flat management: lots of managers at the same level
- Conflicting decisions ensure developers are always doing something wrong
- Status quo über alles: status quo above all else
- Don’t do anything different; don’t try and improve anything
- Level -2: Contemptuous
- Disregard for good engineering institutionalized
- ‘Get stuff done’; arrogance against those following processes
- ‘It’s all academic’
- Process improvement activities actively disregarded
- Complete lack of training program
- Level -3: Undermining
- Discrediting competitors and pointing the finger at others rather than improving themselves
- Rewarding failure and poor performance
06. Testing
Testing strategies != testing
Debate: developers should not test their own code/program
Positive: developers should develop, testers should test
Negative: developers should develop and test
Positive:
- Separation of concerns:
- One team makes, one team breaks
- Specialization
- Developers are not end users; testers can have a better understanding of the domain and of how users will use the software
- Know what your own code does: may only write tests you know will pass
- Developer may misinterpret requirements and write tests accordingly; a tester will have their own understanding of the requirements
- Gives developer more confidence in their code; experienced tester there to catch bugs
- Will write code that is easier to test as you know that someone else will be looking through it
Negative:
- Latency involved in the back-and-forth between developers and testers
- Counterpoint; writing code that is easier to test if there is a dedicated tester
- Should be writing easy-to-test code anyway
- Development alternates between heavy testing and heavy coding; if developers also test, they can shift between the two workloads - this isn’t really possible with dedicated testers
- Understanding existing tests helps when writing newer features
- Can’t really use TDD if dedicated testers are involved; TDD is iterative, which is hard to do when there are separate teams
Counterpoints against positive:
- Developers should have good understanding of users and problem domain anyway
- Code review process should catch requirements being misinterpreted
- Having dedicated testers may lead to complacency in code quality and review process
Counterpoints against negative:
- Some industries have strict regulations, require dedicated testers
- Domain knowledge: some is expected, but unrealistic to expect deep domain knowledge from every tester
- Lower bus factor: developer + tester both need to understand the domain
Quality
- Who creates quality? The developers or the testers?
- Who is responsible for (maintaining) quality?
- When is quality created?
Quality is created by the developer - so what is testing for?
Testing isn’t about unit testing or integration testing. It is the mindset; a systematic process of:
- Poking and prodding at a system to see how it behaves
- Understanding the limits of a system
- Determining if it behaves as expected
- Determining if it does what it is meant to do; that it is fit for purpose
Testing is about how a user experiences the system and how it compares to our expectations.
In what contexts is testing not required?
- When making a one-off thing (a prototype)
- When it doesn’t matter if it works right
- Zero impact on people’s lives or livelihoods
- Small programs
Hypothesis Testing
The broad steps:
- Conjecture
- Some sort of expectation informed by your model of the system/world
- Hypothesis (and null hypothesis)
- A testable conjecture
- Conducting systematic testing of the hypothesis, possibly in multiple ways
- Supporting/rejecting the null hypothesis
Example:
- Model + Conjecture:
- Logging in is a difficult feature to create securely
- I have a feeling there is a flaw in the login logic
- Hypothesis:
- Insecure logins are possible
- Testing:
- Use ‘back button’ after logging out
- Refresh the page
- Checking if passwords are plain text
- Sending information as a GET request
- Logging in with default credentials (e.g. admin / password)
- Attempting an SQL injection attack
- Attempting a login with no password
- etc.
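A sketch of how a few of these probes could be captured as automated falsification attempts (the AuthService interface and the JUnit 5 usage are assumptions; the point is the falsifying mindset, not the tooling).

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.api.Test;

// Hypothetical system under test; a real login service would sit behind this interface.
interface AuthService {
    boolean login(String username, String password);
}

class LoginFalsificationTest {
    // Wire in the real implementation here; a stub keeps the sketch compilable.
    private final AuthService auth = (username, password) -> false;

    @Test
    void loginWithEmptyPasswordIsRejected() {
        assertFalse(auth.login("alice", ""));
    }

    @Test
    void defaultCredentialsAreRejected() {
        assertFalse(auth.login("admin", "password"));
    }

    @Test
    void sqlInjectionInUsernameIsRejected() {
        assertFalse(auth.login("' OR '1'='1", "anything"));
    }
}
```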
Verifiability vs Falsifiability
What will it take for us to be able to claim that there are no bugs in the system?
You must test every conceivable avenue and every single branch; verify the system. This is almost impossible, although formal proofs are possible in limited domains.
Karl Popper - The Logic of Scientific Discovery, 1934.
Verifiability: every single branch can be tested
Falsifiability: at least one example that contradicts the hypothesis can be found
Hence, there is a large asymmetry between the two: when making scientific hypotheses, we find evidence to support or disprove the hypothesis but we can never prove the hypothesis is true.
Testing vs. Automation
Automations help with making the testing process easier; it is not testing itself.
Testing is the human process of thinking about how to verify/falsify.
Testing is done in context; humans must intelligently evaluate the results taking this into account.
Biases
Confirmation Bias
The tendency to interpret information in a manner that confirms your own beliefs:
- x is secure
- In what way? How does it need to be used? What are its limits?
- 100% test coverage means there are no bugs
- Documentation being used to confirm a tester’s belief about the SUT (system under test)
- Assumes that the SUT’s documentation is completely correct
- Positive test bias
- Testing positive outcomes is verifying; instead you should be attempting to falsify by choosing tests and data that may lead to negative outcomes
Congruence Bias
Subset of confirmation bias, in which people over-rely on their initial hypothesis and neglect to consider alternatives (which may indirectly test the hypothesis).
In testing, this occurs if the tester has strategies that they use all the time and do not consider alternative approaches.
Anchoring Bias
Once a baseline is provided, people unconsciously use it as a reference point.
Irrelevant information affects the decision making/testing process.
The tester is already anchored in what the system does - perhaps from docs, user stories, talks with management etc. - and may not consider alternative branches.
Functional fixedness: a tendency to only test in the way the system is meant to be used and not think laterally.
Law of the Instrument Bias
Believing and relying on an instrument to a fault.
Reliance on the testing tool/methodology e.g. acceptance/unit/integration testing: we use x therefore y must be true.
The way the language is written can affect it as well. e.g. the constrained syntax of user stories leads to complex information and constraints being compressed and relevant information being lost.
Resemblance Bias
The toy duck looks like a duck so it must act like a duck: judging a situation based on a similar previous situation
e.g. if you have experience in a similar framework, you may make assumptions about how the current framework works based on your prior experience. This may lead to ‘obvious’ things being missed or mistaken.
Halo Effect Bias
Brilliant people/organizations never make mistakes. Hence, their work does not need to be tested (or this bug I found is a feature, not a bug).
Authoritative Bias
- Appealing to authority
- Testers feeling a power level difference when talking to developers
- Listening to what management wants rather than what should be tested
- Management should be told about the consequences of any steps that are skipped
Types of Testing Techniques
Static testing:
- Looking at the static code or document
- Static code analysis, cross-document traceability analysis, reviews
Dynamic testing:
- Forcing failures in executable items
Scripted vs unscripted tests; compared to unscripted tests, scripted tests:
- Are repeatable, providing auditability and verification and validation
- Unscripted tests generally have little to no records and are not repeatable
- Allow test cases to be explicitly traced back to requirements; test coverage can be documented
- Allow test cases to be retained as reusable artifacts for current and future projects, saving time in the future
- Are more time-consuming and costly, although this may be mitigated by automating the tests
- Have test cases defined prior to execution, making them less adaptable to the system as it presents itself and more prone to cognitive biases
- Unscripted tests allow testers to follow ideas and change their behavior based on the system’s behavior
- Are boring; testers may lose focus and miss details during test execution
- Unscripted testing requires more thought and hence is less prone to biases
Testing Toolbox
Three main classes:
- Black box testing:
- Specification-based testing: does it meet the user-facing requirements?
- No access to internals
- White box testing:
- Structure-based testing
- Full access to implementation
- Grey box testing:
- A combination of black and white box testing
Unit testing:
- White box testing
- Test individual units
Integration testing:
- Testing the interface between two modules
- API testing
- Grey box
System testing:
- Testing the system; does the system do what it is meant to?
- Black box test
- Many types of tests: regression, performance, sanity, smoke, installation etc.
Smoke testing:
- AKA build verification/acceptance testing
- Pumping smoke into the pipe and seeing if any smoke comes out of cracks
- Testing to see that the critical, core functionality works (e.g. can it boot)
- A time saving measure: is the system stable enough that we can go into the main testing phase?
Sanity testing:
- Very high-level regression test, similar to smoke testing
- Testing if it is sane; does the system perform rationally and do what it is meant to do?
Regression testing:
- Verifying that the system continues to behave as expected after something has been modified
- Each test targets a specific small operation
Acceptance testing:
- Formal tests; used during validation
- Checking if the system satisfies requirements
- Customer decides if it is accepted
- Types:
- End-user acceptance testing (UAT)
- People simulating end-users test the system
- Business acceptance testing (BAT)
- Checking that the system meets the requirements of the business
- Regulations/standards acceptance testing (RAT)
- Alpha/beta testing
- Accessibility testing
- Accessible by the target audience
- Text contrast, colors, highlighting
- Magnifications
- Screen readers
- UI hierarchy
- Special keyboards
- User guides, training, documentation
- Performance testing
- Non-functional requirements
- Is the system fast enough?
- Load testing (at the expected load)
- e.g. UC network was tested and found to perform great, but many students would log in to lab machines at the start of the hour and overload the system
- Stress testing (under max load or beyond for long periods)
- Data transfer rates, throughput
- CPU/memory utilization
- Running the system on a client with limited resources
- Or on networks where certain resources may be blocked (e.g. China)
- Are the devices you are testing on representative of what clients will be using?
- ‘Service level agreements’
End-to-end testing:
- Scenario-based testing: testing a real scenario a user may run into, from the beginning to the end
- Uses actual data and simulated ‘real’ settings
- Expensive and cannot usually be fully automated
Security testing:
- Access, authentication and authorization
- Roles, permissions
- Vulnerabilities, threats, risks
- Present and future: think about what may happen in the future
- Attacks
- Data storage (security), encryption
- Types:
- Penetration testing
- Security audit
Test/Behavior Driven Development (TDD/BDD)
Development, NOT testing strategies.
Tests made in this process are prototypes and hence should (in theory) be thrown away and rewritten; keeping them is the sunk-cost fallacy.
TDD tests are blue-sky, verification tests rather than falsification tests.
Audits
How will you test the system?
Look at the tests, not the techniques.
James Bach - The Test Design Starting Line: Hypotheses - Keynote PeakIT004
Testing Certifications
Standards:
- Condensed experience, knowledge and wisdom from the domain experts that wrote the standards
- Provides confidence for management, customers, the development team and the government
- Standards != quality
International software testing qualifications board (ISTQB):
- Most popular testing certification
- Multichoice exams
- Teaches testing techniques, not how to test
- Testing is done by humans; testing techniques help humans do the testing
- Always take the context of the system under test (SUT) into account
In the exam:
- Four questions which provide scenarios
- Don’t just vomit out testing techniques
ISO/IEC/IEEE 29119-4 Test Techniques
Split into three different high-level types:
- Black/specification-based testing
- White/clear/structure-based testing
- Grey: combination
Specification
Equivalence Class Partitioning (ECP)
Partition test conditions, usually inputs, into sets: equivalence partitions/classes. Be careful of sub-partitions.
Only one test per partition is required.
e.g. alphabetical characters, alphanumeric, ASCII, emoji, SQL injection.
e.g. a square root function could have num >= 0, negative int, and negative float equivalence classes (sketched below)
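A minimal sketch of the square-root example, assuming a hypothetical rule that negative inputs are rejected; one representative value per partition is usually enough:

```java
// Equivalence class partitioning sketch: one representative input per partition.
// The 'safeSqrt' rule (reject negatives) is an illustrative assumption, not course code.
public class EcpSketch {
    static double safeSqrt(double x) {
        if (x < 0) throw new IllegalArgumentException("negative input");
        return Math.sqrt(x);
    }

    public static void main(String[] args) {    // run with: java -ea EcpSketch
        assert safeSqrt(9.0) == 3.0;            // partition: x > 0 (valid)
        assert safeSqrt(0.0) == 0.0;            // partition: x == 0 (valid edge partition)
        try {                                   // partition: x < 0 (invalid)
            safeSqrt(-4.0);
            assert false : "expected an exception for the invalid partition";
        } catch (IllegalArgumentException expected) { /* pass */ }
        System.out.println("one test per partition executed");
    }
}
```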
Classification Tree Method
Grimm/Grochtmann, 1993:
- Find all classifications/aspects
- Divide the input domain into subsets/classes
- Select as many test cases as are needed for a thorough test
e.g. DBMS:
- Classification aspects are:
- Privilege: regular, admin
- Operations: read, write, delete
- Access method: CLI, browser, API
- For each test, pick one value from each class
- Make enough test cases for ‘thorough’ coverage: do not need to have tests for every permutation
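A small sketch of turning the DBMS classifications above into concrete test cases; the four selected rows are illustrative only, while the full Cartesian product would be 2 × 3 × 3 = 18 cases:

```java
// Classification tree sketch: pick one value from each classification per test case,
// covering every value at least once without enumerating all 18 permutations.
import java.util.List;

public class ClassificationTreeSketch {
    record TestCase(String privilege, String operation, String access) {}

    public static void main(String[] args) {
        List<TestCase> selected = List.of(
            new TestCase("regular", "read",   "CLI"),
            new TestCase("admin",   "write",  "browser"),
            new TestCase("regular", "delete", "API"),
            new TestCase("admin",   "read",   "API")   // extra case for a combination judged risky
        );
        selected.forEach(System.out::println);
    }
}
```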
Boundary Value Analysis
Test along the boundary:
- Allows you to catch errors such as off-by-one errors
- Equivalence partitioning usually used to find the boundaries
- Check to ensure you have found all boundaries
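A minimal sketch, assuming a hypothetical rule that a grade must lie in [0, 100]; tests sit on and either side of each boundary so an off-by-one (e.g. writing `> 0` instead of `>= 0`) is caught:

```java
// Boundary value analysis sketch for the hypothetical rule 0 <= grade <= 100.
public class BvaSketch {
    static boolean isValidGrade(int grade) {
        return grade >= 0 && grade <= 100;
    }

    public static void main(String[] args) {
        int[][] cases = {
            {-1, 0}, {0, 1}, {1, 1},      // lower boundary: just below, on, just above
            {99, 1}, {100, 1}, {101, 0}   // upper boundary: just below, on, just above
        };
        for (int[] c : cases) {
            boolean expected = c[1] == 1;
            if (isValidGrade(c[0]) != expected) {
                System.out.println("FAIL at " + c[0]);   // would expose an off-by-one in the rule
            }
        }
        System.out.println("boundary checks done");
    }
}
```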
Syntax Testing
Tests the language’s grammar by testing the syntax of all inputs in the input domain.
Requires a very large number of tests. Usually automated and may use a pre-processor.
Note that a correct syntax does not mean correct functionality.
Process:
- Identify the target language/format
- Define the syntax in formal notation
- Test and debug the syntax
- Use the syntax graph to test normal and invalid conditions
Combinatorial Test Techniques
When there are several parameters/variables. TODO
Reduce the test space using other techniques:
- Pair-wise testing
- Each choice testing
- Base choice testing
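A sketch of pair-wise testing with three hypothetical parameters (OS, browser, locale): six hand-picked cases cover every pair of values, versus 3 × 2 × 2 = 12 exhaustive combinations; the loop at the end simply verifies that pair coverage holds:

```java
// Pair-wise (all-pairs) testing sketch: the chosen 6 tests cover every pair of
// parameter values, reducing the 12-case exhaustive combination space.
import java.util.HashSet;
import java.util.Set;

public class PairwiseSketch {
    public static void main(String[] args) {
        String[][] tests = {
            {"win",   "chrome",  "en"}, {"win",   "firefox", "fr"},
            {"mac",   "chrome",  "fr"}, {"mac",   "firefox", "en"},
            {"linux", "chrome",  "en"}, {"linux", "firefox", "fr"},
        };
        String[][] values = {{"win", "mac", "linux"}, {"chrome", "firefox"}, {"en", "fr"}};

        // Record every pair of (parameter, value) choices that appears in some test.
        Set<String> covered = new HashSet<>();
        for (String[] t : tests)
            for (int i = 0; i < t.length; i++)
                for (int j = i + 1; j < t.length; j++)
                    covered.add(i + ":" + t[i] + "|" + j + ":" + t[j]);

        // Every pair of values from different parameters must be covered by some test.
        for (int i = 0; i < values.length; i++)
            for (int j = i + 1; j < values.length; j++)
                for (String a : values[i])
                    for (String b : values[j])
                        if (!covered.contains(i + ":" + a + "|" + j + ":" + b))
                            System.out.println("missing pair: " + a + ", " + b);
        System.out.println("all pairs covered by " + tests.length + " of 12 possible tests");
    }
}
```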
Decision Table Testing
AKA cause-effect table testing
Software makes different decisions based on a variety of factors:
- State
- Input
- Rules
Decision table testing tests decision paths: different outputs triggered by the above conditions.
Decision tables help to document complex logic and business rules. They have CONDITIONS (e.g. user logged in or not) and ACTIONS that are run when the conditions are met (by the user and/or system).
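A minimal sketch of a decision table for a hypothetical ‘edit document’ rule: each row pairs a combination of conditions with the expected action, and each row becomes one test:

```java
// Decision table testing sketch: conditions (loggedIn, isOwner) map to actions.
// The rule itself is an illustrative assumption, not from the course material.
public class DecisionTableSketch {
    static String decide(boolean loggedIn, boolean isOwner) {
        if (!loggedIn) return "SHOW_LOGIN";
        return isOwner ? "ALLOW_EDIT" : "READ_ONLY";
    }

    public static void main(String[] args) {
        // Columns: loggedIn, isOwner, expected action (one test per rule of the table).
        Object[][] table = {
            {false, false, "SHOW_LOGIN"},
            {false, true,  "SHOW_LOGIN"},
            {true,  false, "READ_ONLY"},
            {true,  true,  "ALLOW_EDIT"},
        };
        for (Object[] rule : table) {
            String actual = decide((boolean) rule[0], (boolean) rule[1]);
            if (!actual.equals(rule[2]))
                System.out.println("rule failed: " + java.util.Arrays.toString(rule) + " got " + actual);
        }
        System.out.println("decision table exercised");
    }
}
```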
Cause-Effect Graphs
AKA Ishikawa diagram, fish bone diagram.
Document dependencies.
Syntax:
- Cause (c) and effect (e) nodes, each inside a circle
- Intermediary nodes (e.g. an AND joining two causes) have no label
- Lines connecting the causes to effects
- NOT: a ~ on the line
- OR: an arc intersecting the lines between the causes and the effect, with a ∨ next to the arc
- AND: an arc intersecting the lines between the causes and the effect, with a ∧ next to the arc
Example
If the user clicking the ‘save’ button is an administrator or a moderator, then they are allowed to save. When the ‘save’ button is clicked, it should call the ‘save’ functionality.
If the user is not an admin or moderator, then the message in the troubleshooter/CLI should say so.
If the ‘save’ functionality is not hooked up to the ‘save’ button, then there should be a message about this when the button is clicked.
C1: the user is an admin
C2: the user is a moderator
C3: the save functionality is called
E1: the information is saved
E2: the message ‘you need to be an authenticated user’
E3: the message ‘the save functionality has not been called’
(Cause-effect graph for this example: C1 and C2 combine through an ∨; that result combines with C3 through an ∧ to give E1; a ~ on the authenticated (C1 ∨ C2) line gives E2; a ~ on the C3 line gives E3.)
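A sketch that encodes one reading of this example as boolean logic (E1 when (C1 ∨ C2) ∧ C3, E2 when ¬(C1 ∨ C2), E3 when ¬C3) and exercises each effect:

```java
// Cause-effect sketch: one possible encoding of the save-button example above.
public class CauseEffectSketch {
    record Effects(boolean saved, boolean authMessage, boolean notHookedUpMessage) {}

    static Effects press(boolean isAdmin, boolean isModerator, boolean saveCalled) {
        boolean authorised = isAdmin || isModerator;    // C1 v C2
        return new Effects(
            authorised && saveCalled,                   // E1
            !authorised,                                // E2
            !saveCalled                                 // E3
        );
    }

    public static void main(String[] args) {            // run with: java -ea CauseEffectSketch
        assert press(true, false, true).saved();                // admin + hooked up -> E1
        assert press(false, false, true).authMessage();         // not authorised -> E2
        assert press(true, false, false).notHookedUpMessage();  // save not called -> E3
        System.out.println("cause-effect cases exercised");
    }
}
```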
More complex diagrams should use fishbone diagrams.
State Transition Graphs
- States
- Transitions between states
- Events (that trigger transitions)
- Actions (resulting from transitions)
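A minimal sketch of state transition testing for a hypothetical document workflow: each valid transition, plus at least one invalid event, gets a test:

```java
// State transition sketch: states, events and the transitions between them.
import java.util.Map;

public class StateTransitionSketch {
    enum State { DRAFT, REVIEW, PUBLISHED }
    enum Event { SUBMIT, APPROVE, REJECT }

    static final Map<State, Map<Event, State>> TRANSITIONS = Map.of(
        State.DRAFT,  Map.of(Event.SUBMIT,  State.REVIEW),
        State.REVIEW, Map.of(Event.APPROVE, State.PUBLISHED,
                             Event.REJECT,  State.DRAFT)
    );

    static State fire(State from, Event event) {
        State to = TRANSITIONS.getOrDefault(from, Map.of()).get(event);
        if (to == null) throw new IllegalStateException(event + " not valid in " + from);
        return to;
    }

    public static void main(String[] args) {            // run with: java -ea StateTransitionSketch
        assert fire(State.DRAFT, Event.SUBMIT) == State.REVIEW;
        assert fire(State.REVIEW, Event.REJECT) == State.DRAFT;
        try { fire(State.PUBLISHED, Event.SUBMIT); assert false; }
        catch (IllegalStateException expected) { /* invalid transition correctly rejected */ }
        System.out.println("transition table exercised");
    }
}
```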
Scenario Testing
Scenarios are a sequence of interactions (between systems, users etc.).
Scenarios should be credible and replicate an end-user’s experience. They should be based off of a story/description.
Scenario tests test the end-to-end functionality and business flows, both blue-sky and error cases. However, scenario tests should not need to be exhaustive - these are expensive and heavily-documented tests.
Scenario tests also test usability from the user’s perspective, not just business requirements.
Random/Monkey Testing
Using random input to test; used when the time required to write and run the directed test is too long, too complex or impossible.
Heuristics could be used to generate tests, but care should be taken to ensure there is still sufficient randomness to cover the specification.
There needs to be some mechanism to determine when a test fails, and the ability to reproduce the failing test.
Monkey testing is useful to prevent tunnel vision and when you cannot think laterally.
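A minimal sketch of random testing: many random inputs are fed to the unit under test (here just `Arrays.sort` as a stand-in), an invariant is checked, and the seed is printed so any failure can be reproduced:

```java
// Random ('monkey') testing sketch: random inputs, an invariant check, and a
// logged seed so a failing run can be replayed.
import java.util.Arrays;
import java.util.Random;

public class MonkeyTestSketch {
    public static void main(String[] args) {
        long seed = System.nanoTime();           // log the seed so failures are reproducible
        Random random = new Random(seed);
        System.out.println("seed = " + seed);

        for (int run = 0; run < 10_000; run++) {
            int[] input = random.ints(random.nextInt(50), -1000, 1000).toArray();
            int[] output = input.clone();
            Arrays.sort(output);                 // the unit under test (stand-in)
            for (int i = 1; i < output.length; i++) {
                if (output[i - 1] > output[i]) { // invariant: output is non-decreasing
                    System.out.println("FAILED for input " + Arrays.toString(input));
                    return;
                }
            }
        }
        System.out.println("no invariant violations found");
    }
}
```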
Structure-Based Techniques
Structure and data.
Statement Testing
AKA line/segment coverage.
Test checks/verifies each line of code and the flow of different paths in the program.
Conditions that are always false cannot be tested.
Similar to BVA except it is focused more on the paths rather than the input.
Branch/Decision Testing
Test each branch where decisions are made.
Branch coverage:
- The minimum set of paths which ensures every branch is covered
- Measures which decision outcomes have been tested
All branches are validated.
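A small sketch of the difference between statement and branch coverage, using a hypothetical overdraft-fee function: the negative-balance test alone executes every statement, but branch coverage also requires a test that takes the false branch of the `if`:

```java
// Statement vs branch coverage sketch (hypothetical overdraft-fee rule).
// Calling applyFee(-5) alone executes every statement, yet never takes the
// false branch of the if, so branch/decision coverage is incomplete.
public class CoverageSketch {
    static double applyFee(double balance) {
        double fee = 0;
        if (balance < 0) {
            fee = 10;            // overdraft fee
        }
        return balance - fee;
    }

    public static void main(String[] args) {     // run with: java -ea CoverageSketch
        assert applyFee(-5) == -15;   // covers all statements (true branch taken)
        assert applyFee(20) == 20;    // needed for branch coverage (false branch taken)
        System.out.println("both branches exercised");
    }
}
```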
Data Flow Testing
Test for data flows, detects improper use of data in a program such as:
- Variables that are declared but never used
- Variables that are used but never declared
- Variables that are defined multiple times before being used
- Variables that are deallocated before being used
It creates a control flow graph and a data flow graph; the latter represents data dependencies between operations.
Static data flow testing analyzes source code without executing it, while dynamic data flow testing does the analysis during execution.
(e.g. data just passing through a class without being used directly by it?).
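A small sketch of the kinds of define/use anomalies data flow testing looks for; the anomalies are deliberate, and a data flow analysis (or an IDE inspection) would flag them:

```java
// Data flow anomaly sketch: variables defined but never used, and redefined
// before their first definition is ever used.
public class DataFlowSketch {
    static int example(int input) {
        int unused = input * 2;     // defined but never used
        int total = 0;              // defined...
        total = input + 1;          // ...and redefined before the first definition is used
        return total;
    }

    public static void main(String[] args) {
        System.out.println(example(5));
    }
}
```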
Experience-based Testing
Error guessing: get an experienced tester to think of situations that may break the program.
Error guessing:
- Depends on the skill, experience, and intuition of the tester: no explicit rules or testing methods
- Can be made somewhat systematic: list possible defects/failures and design tests to produce them
- Can be effective and save time
07. Project Management
Industry compared to SENG302:
- Scrum masters usually normal developers: saves money
- Focus on the product, not the developers
- As long as Moffat is in charge, there will never be real customers (again)
- Efficiency: streamlining resources and money
SENG302 teaches you an ideal way that software development should work, which businesses may not follow for efficiency reasons.
Exercise: Design a Software Methodology
Exercise: design and justify a software methodology that will replace current agile methodologies:
- Description
- Timeline
- Roles, ceremonies, events
- Processes
Use a limited subset. Assume a company building a product with its own PO (not an agency building products for external customers):
- Stakeholders = company staff + end users
- Product manager: developer in charge
- Specialized developers
- Sub-teams
- Team lead: normal developer
- Team lead in charge of feedback
- Shared backlog visible to everyone
- ‘Exchange’ programs: peer review code from other sub-teams to get a better idea of the challenges faced outside their own sub-team
- Agile but not agile
- Development cycles:
- Interview and gather feedback from potential end-users
- Determine how the product being built will fit into their workflow
- PM picks features
- … Isn’t this Agile
- Continuous development: ship as soon as a feature is done
- Team meetings:
- Weekly meetings within sub-team; team lead
- Developer adds log/message as task is completed
- Gather feedback
- Dev team also spends some time as customer service
- 20% of developer time also spent on whatever they want: QoL improvements that never get prioritized
Other teams:
- Not scrum 1:
- Stories planned
- Sprint length determined by customer
- Daily stand-ups
- Informal weekly demos
- Pirate:
- Captain: PM
- Quartermaster: represents team
- Daily standups, must be virtual
- Three week sprints, one week verification/validation, documentation etc.
- Anything you should have done but didn’t do during the sprint
- Weekly planning meeting
- Chaos engineering:
- Team leader: unilateral authority. Rotates every sprint
- Abuse authority, get screwed by the team next week
- Week long sprints
- No formal planning beyond this point
- Team can discuss among themselves knowing the rotation order
- Negotiates with PO, but team leader has control
- PO doesn’t own the product
- Anything not completed gets dropped; up to the new team leader to decide
- Meeting at start of the sprint: task delegation
- Distributed team
- Team representative summarizes work and presents to PO
- Retros every month: written report
- Variable teams
- Scrum as a base
- No estimates: tasks discussed in planning period, but not estimated
- Team chooses sprint duration
- Team decides number of developers each sprint
- Developers not part of sprints work on general codebase issues
- Randomized task allocation
- Tasks should all work towards some small theme/story
- Continuous
- PO: reviews code
- Planner: generates tasks, estimates
- Testers: TDD
- Developers: develop
- Daily stand-ups
- Monthly process review: entire team
- Not scrum 2:
- Scrum, but even more agile
- Stories with ACs defined by PO
- Day of spikes to research
- Sprint length defined by this
- Tasking done by developers as you go: no estimates
- Team leader acts as team’s task reviewer
- Genetic algorithm
- No planning, two week sprints
- Half of the products die
- Products randomly merged together into new products
- How? Git merge…
- Developers merge as they see fit; could completely remove a branch
- All products get deployed: product that makes the best money survives
- All the different species are documented
- Zoom people
- Agile/scrum, but better
- Had no time to fix bugs etc.
- Senior dev assigns tasks, with soft deadlines for each task
- Also assigned to bugs/issues
- No sprints: too much pressure to complete tasks on time
- Daily update/progress message
Analysis:
- Most lacked justifications: why
- Principles: must mention testing, technical debt, quality
- Comments:
- Shows difficulty of coming up with a good process
- Large companies:
- Different departments do different work and have different requirements, leading to each team having its own processes/methodologies
- Internal/external products, small/large organizations
- Most were based around Agile
- PO role varied from God to no influence
Project Management
- Waterfall
- Agile
- Scrum
- ScrumBut
- ScrumAnd
- TODO
Project management history:
- Frederick Winslow Taylor, 1856-1915
- ‘Father’ of scientific/modern project management
- Documented labor processes
- How much raw material is coming in, how efficient is the output?
- Standardized processes and measured performance
- Aimed at business owners:
- Can reduce labor cost by doing XYZ
- Wanted to get rid of unions: unpopular
- Metrics used to measure individual performance: never do this
- Henry Gantt, 1861-1919
- Focused on scheduling: task dependencies
- Appealed to workers: short workweeks, higher pay
- Created the Gantt chart
- Used by navy shipbuilders, WW1 logistics/supply
- Great depression:
- Workers used to be employed at a single company for life
- Workers were laid off or companies folded
- Works Progress Administration: Hoover Dam, airports etc. built
- Many projects in progress in parallel
- Led to project managers
- Lockheed Martin: mounting missiles to submarines
- Program evaluation and review technique (PERT) charts:
- Optimistic time for project (best case)
- Most probable amount of time (normal)
- Pessimistic time (worst case)
- Expected time (best estimate plus delays)
- PERT further developed by DuPont, Remington Rand:
- Critical path method (CPM): tasks which, if delayed, delay the entire project
- Critical path is the most important determinant of schedule
- PERT and CPM together:
- Visualized dependencies for better understanding
- Identified critical paths
- Calculated project times: consider early start, late start, slack/delays for each activity
- Probability driven: probability of completing at a particular time
- Visual charts
- Could get massive
- Waterfall project management
- Not good for:
- Variability/uncertainties
- Flexibility/changing scope
- TODO
- Toyota: 14 Principles
- Published 2001
- Very large influence on software development
- Car manufacturing: a lot of raw material and a lot of interdependent steps led to a lot of waiting time in case of delays
- Each area had to have a lot of raw materials just-in-case
- Goal: manage bottlenecks for each area in order to manage work in progress
- Philosophy:
- Base management decision on long-term philosophy, even at the expense of short-term financial goals
- Ethical values: can do what you say
- Cultural values
- TODO
- Create continuous processes to bring problems to the surface
- Eliminate waste through continuous management
- The product can continue as long as the customer keeps paying us: iterative and continuous
- Use ‘pull’ systems to reduce overproduction
- Raw materials ‘pushed’ into the next steps
- Rather, work should be pulled from the steps before
- If customers don’t need the product, nothing should get made
- Snow-ploughing: tasks get pulled to in progress when a task gets done
- Level out workload
- Reduce unevenness
- Consistent, sustainable stream of work
- Reduce muri TODO
- Culture of quality
- If a problem is found, fix it
- Get quality right the first time
- Refactor continuously, re-engineer rarely, rewrite never
- Measure and pay off technical debt
- Process for continuous improvement
- Kaizen: continuous improvement
- Empower employees: self-organizing, self-directed teams
- Inspect and adapt
- Processes: review, retrospectives, stand-ups
- Reflection
- Efficient workspace, use visual control
- Task board
- Use often and regularly
- Keep in a prominent place
- Red/green: always know the current status
- Visualize pipelines, builds etc.
- Place retro items
- TODO
- Use reliable, thoroughly-tested technologies
- Use the appropriate technology for the job: don’t push technology onto the project
- Applies to languages, frameworks/libraries, design patterns
- Grow people/leaders who thoroughly understand the work and philosophy, and who will teach it to others
- Scrum masters
- Should not be developers
- They should be learning about whatever they will be teaching the team
- They should be teaching/educating the customer so that they understand what they are doing and what the team expects from them
- Flat hierarchy
- TODO
- A ‘learning organization’
- CMMI: organization must improve
- Develop exceptional people/teams who follow the organization’s philosophy
- Success is based on the team, not the individual
- Remove silos, lower bus factor
- Team, not individual code ownership
- Javadoc: has fields for author name
- Create and document processes, principles
- Flat hierarchy: self-directed, empowered teams
- Requires even experience
- TODO
- Initiative: someone must take the initiative and drive things forwards
- Teach individuals
- Continuous improvement (Kaizen)
- Respect your extended network and help them improve
- Suppliers etc. are also stakeholders
- Teach customers (e.g. methodology being used, what is expected of them)
- Update customers/stakeholders often and get feedback from them
- Use processes that increase communication and transparency
- Experience it first-hand to understand the situation
- Waterfall approach: what the customer asked for was not what the end-user actually needs
- Need to see for yourself to understand the situation
- Empathy driven design
- Think about the customer’s environment
- e.g. network speed, computer performance, familiarity with computers
- Make decisions by consensus, consider all options, implement rapidly
- Candidate solutions with pros/cons
- Spike and prototype
- No hero culture: team is involved in decisions
- High communication: task board, logging, communication tools, ceremonies, meeting
- Fail fast, fail often
- Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen)
- Document learning from previous projects
- Evaluate, measure, analyze, reflect
- Use metrics to improve
- e.g. Sprint velocity
- Not often done in business: no immediate ROI
- Need to understand what the metrics mean
- Tools aren’t very good
- Default settings must be changed: otherwise you can get overwhelmed
- Use tools to give real-time metrics
- Standardize across the organization, but with flexibility built in
- Communicate findings across the organization
Three M’s (wasteful actions)
Used in Lean
- Muda
- Wastefulness
- Work that adds no value
- Gold-plating
- Things you think the customer may need but never will
- Over-engineering
- Compare to work that adds value but is not recognized
- Mura
- Irregularity/lack of uniformity in the work
- Variability that causes muda
- Kanban attempts to address this
- Muri
- Don’t overwork the people or the equipment
- Leads to unsustainable development
- Prone to failure
Waterfall
There is no development method called ‘waterfall’: it is an umbrella term.
Software development life cycle (SDLC):
- Requirements analysis
- A huge amount of documentation, UML diagrams etc.
- Legal representatives from both companies often involved
- Whose fault is this?
- Planning
- Every decision signed off by management and/or the customer
- Software design
- Development
- Testing/Validation
- Deployment
- Maintenance
Waterfall can be iterative: a lot of overhead.
Why use waterfall?
- Scheduling: dependencies known and can be time boxed
- Clients can give their requirements and then ignore the project until it is done
- For projects with known scopes
- Agile projects have no ending
- For critical systems:
- Avionics/space, medical, infrastructure
- Low-trust environments:
- Government departments
- Big institutions
- When someone needs to be blamed and pay up
Waterfall TODO:
- Distinct sequential phases
- Well known problem:
- Low amounts of research needed
- Low amounts of flexibility needed
- Predictable:
- Dependencies can be mapped out
- Known architecture and technologies
- Very high quality is required
- Time devoted to getting high quality
- Higher expense built in (known phases)
- Low-trust between the business and customers
Agile is very expensive:
- Changing requirements leads to more re-engineering
- High technical debt: fast time to market, but has costs further down the road
Extreme Programming (XP):
- Focus on adaptability (rather than predictability)
- Assumes that requirements cannot be predicted at the beginning of software projects
- Four values
- Improve communication
- Seek simplicity
- Seek feedback
- TODO Courage
- Four activities:
- Coding
- Testing
- Listening (to customers, to the team, to stakeholders)
- Designing
- Good programming practices pushed to the extreme (or what was considered extreme when it was created):
- Code reviews:
- Reviewed all the time through pair programming
- Rather than at the end
- Testing:
- Test all the time
- Maybe through TDD ()
- Everyone tests, including the customer
- Design:
- Everyone is responsible for the design
- Refactor continuously
- Simplicity:
- Start with the smallest, simplest solution
- Integration testing:
- Integrate multiple times a day
- Short iterations:
- Make iterations as short as possible
- Release plan (months), iteration plans (weeks), acceptance test (days), stand-up meeting (daily), pair negotiation (hours), unit testing (minutes), pair programming (seconds)
- 12 practices
- Planning games, user stories
- Small releases
- Metaphors
- Simple design: YAGNI
- Testing
- Refactoring
- Pair programming
- Collective ownership
- 40 hour weeks: sustainable development
Agile principles:
- Agile is not a methodology
- You do not ‘do’ Agile
- There are other methodologies/frameworks that follow the Agile principles
- Iterative, structured
- Very product-based: product and quality matter
- Very team-based: self-organizing, flat hierarchy
- Principles:
- Values:
Lean:
- Came from manufacturing
- Created by the Lean Enterprise Institute (1997)
- Focuses on eliminating waste
- Only have what we need at the point in time when we need it
- Five key principles:
- Identify value:
- What does the customer need?
- Map value stream:
- What steps and activities are required to make and deliver the product?
- Eliminate steps that do not create value
- Create flow:
- Remove bottlenecks
- Keep value-occurring steps in tight sequence
- Establish pull:
- Just-in-time delivery: customer gets what they need as they need it
- Seek perfection:
- Continuous improvement
Scrum:
- Flexible but formal
- Framework - does not define processes
- Does not define roles, how to test etc.
- Simply gives you a structure to work around
- Ceremonies
- Roles
- Regular, fixed-length sprints
- Event-driven
- Fairly predictable with defined, bite-size goals
- Small, self-organizing teams
- Can have teams-of-teams
- Orthogonal teams (e.g. UX team responsible for all product UX within an organization)
Kanban:
- Very flexible
- Concerned with throughput
- Maximizes efficiency/flow
- Not as much process: no ceremonies, no roles
- No iterations: continuous flow
- Good for continuous production
- Not concerned with teams
Scrumban:
- Scrum structure with Kanban’s flexibility
- Short iterations
- No roles
- Not concerned with teams: can contain generalists and specialists
- Daily standups and optional ceremonies
- Sometimes called ‘controlled Kanban’
Project Management Certifications:
- A few available. The most popular are:
- Project Management Professional (PMP) certification
- Offered by the Project Management Institute (PMI)
- Uses the Project Management Body of Knowledge (PMBOK Guide)
- Principles of project management
- TODO
- PRINCE2 Principles
- Continued business justification: tasks must have a clear ROI and the use of time/resources must be justified
- TODO
- Manage by stages
- Manage by exception:
- Measure delegated authority and outcomes within the process
- Create contingencies and action plans
- Focus on products:
- Product requirements determine work activity, not the other way around
- Tailor to suit the project environment
- Highlight the necessary risk, time, capability, size, complexity and crucial components for the project
- Use the tools (e.g. languages, frameworks) that best fit the environment
08. Riel’s Heuristics
Arthur Riel, 1996. 60 guidelines/heuristics for OO programming
Hide data within its class:
- Information hiding
- Private fields should only be accessible by internal methods; public interfaces should be used to provide services
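A minimal sketch of this heuristic: the balance field is private and can only change through public methods that enforce the class’s own rules (the `Account` example is illustrative, not from Riel):

```java
// Information hiding sketch: state is private; the public interface provides services.
public class Account {
    private double balance;          // hidden state: no public setter for direct fiddling

    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("deposit must be positive");
        balance += amount;
    }

    public boolean withdraw(double amount) {
        if (amount <= 0 || amount > balance) return false;   // the class controls its own invariants
        balance -= amount;
        return true;
    }

    public double balance() {        // a narrow, read-only public service
        return balance;
    }
}
```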
A class should not depend on its users:
- Users of a class should depend on the public interface
Minimize the number of public methods:
- More methods mean it is more difficult to use and maintain
- Reuse is harder
- YAGNI
Have a minimal public interface:
- Single responsibility principle
- The more public methods, the more restrictive the contracts
Avoid interface bloat:
- Do not put implementation details (e.g. methods used within public methods) within the public interface of the class
- Do not put ‘extra’ items into the interface
Avoid interface pollution:
- Do not clutter the public interface with things that the class’s users cannot or will not use
- Don’t over-engineer
Nil or export-coupling only:
- Class should only use operations in the public interfaces of other classes, or not use it at all
- Program to the interface, not the implementation
One key abstraction:
- A class should capture one and only one key abstraction
- One ‘noun’ in requirements
- Too many key abstractions -> God classes
Keep related data and behavior in one place:
- Law of Demeter
Separate non-communicating behavior:
- Spin off non-related information into another class
- If a few methods rely on one set of data and other methods work on another: no/little communication between the two sets, separate them into two classes
- Reduce coupling
Model classes, not roles:
- Abstractions should model classes rather than the roles the objects play
- Look at behavior, not roles
- e.g. Lecturer, Tutor, Student are all roles of Person
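A minimal sketch of modelling roles as data on a `Person` rather than as subclasses, so one person can hold several roles at once (illustrative example only):

```java
// Roles-as-data sketch: Lecturer/Tutor/Student are roles a Person plays,
// not separate classes.
import java.util.EnumSet;
import java.util.Set;

public class RolesSketch {
    enum Role { LECTURER, TUTOR, STUDENT }

    record Person(String name, Set<Role> roles) {
        boolean has(Role role) { return roles.contains(role); }
    }

    public static void main(String[] args) {
        Person sam = new Person("Sam", EnumSet.of(Role.TUTOR, Role.STUDENT));
        System.out.println(sam.name() + " tutors: " + sam.has(Role.TUTOR));
    }
}
```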
Distribute system intelligence:
- Uniformly distribute system intelligence horizontally
- Top-level classes should share the work uniformly
- Avoid a system with a God class combined with several minor classes
Avoid God classes:
- Too many responsibilities
- Too many irrelevant functions or global variables
- Difficult to maintain
- Centralized data center
- Could be useful when performance is required
- Be very suspicious of classes with Driver, Manager, System, or Subsystem in their name
Beware of many accessors:
- Classes with too many accessors may mean:
- Related data and behavior are not being kept in one place
- Too many unrelated things are together: SRP
- If other methods can fiddle with internals, the class no longer has control over its own data
Beware of non-communicating methods:
- Classes which have too many non-communicating methods: methods which operate on a subset of the class
Interfaces should be dependent on the model:
- The model should never be dependent on the user interface
- The model should not need to know about the user interface
- Changing the user interface should not change the model
Model the real world:
- Whenever possible: intelligence distribution, God class avoidance, keeping related methods/behavior together
- Easier to understand, decompose and maintain
- May go against tell, don’t ask
Eliminate irrelevant classes
- Usually sub-classes with no behavior
- Can go against ‘avoid concrete base classes’
Avoid verb classes:
- Do not turn an operation into a class: operations should usually be methods
Agent classes irrelevant:
- e.g. Bookshelf, Librarian, Book: sorting should be done by the bookshelf, not the librarian
Minimize class collaborations:
- Reduce coupling
Minimize method collaborations:
- Reduce the number of messages sent between a class and its collaborators
Minimize methods between collaborators:
- Reduce dependencies
- If there are many, should the class and its collaborator be merged into a single class?
Minimize fan-out:
- The product of the number of messages defined by the class and the number of messages they send
Containment implies uses:
- If a class contains objects of another class, the container should send messages to the contained objects: a uses relationship
- Otherwise, the container class must have an accessor for the object in order for it to be of any use, which violates the information hiding principle
Methods should use most fields of a class:
- Otherwise, there may be more than one key abstraction
- Lack of cohesion of methods (LCOM)
- Violates SRP
- Non-communicating methods
Limit compositions in a class:
- Developers should be able to fit all objects for a class within their short term memory
- Limit the number of objects in the class: 7±2
- Most methods should use data
- Helps with keeping methods maintainable
Contain contents, not parents:
- A class should not know who contains it
Contained objects should not be able to use each other:
- Objects with the same lexical scope: those with the same containing class, should not have uses relationships between them
- Leads to dependencies between classes that don’t need it
09. Recap
Not assessed.
Accreditation of Qualifications:
- Washington Accord (WA): Professional Engineers
- Sydney Accord (SA): Engineering Technologists
- Not engineers, but may be delegated authority by professional engineers
- Requires a degree
- Dublin Accord (DA): Engineering Technicians
- Requires a certification
WA graduate attributes:
- Assessable outcomes
- Gives confidence that the educational TODO
- The same for each discipline
Lifelong learning:
- critically analyzing what you are doing
- learning about concepts, not specific tools
SENG401:
- Question: be skeptical
- ‘It depends’
- Arguing/debating
- With justification and examples
- Arguing for the other side to what you believe
- Code ownership
- Estimation
- Should we?
- If not, what do we replace it with? Does it require replacement?
- Technical debt
- Broken window, hole in the roof
- Interest, paying it off, when?
- Visibility, access
- Design by contract
- Design erosion
- Code smells, metrics
- Solving problems, research, balanced viewpoints, communication
- TODO
Standards
- Why do we have them?
- Pros/cons
- What can we learn from them?
- Thinking at a society-wide level
- Verification & validation
- CMMI
- Putting ourselves in the customer’s shoes?
- Organization-wide standards
- What does ‘maturity’ mean?
- When might we want to be appraised?
- Software: never finished; other disciplines
Quality
- What is it? Why do we care?
- Difference in context = difference in strategy
- Testing strategy
- Testing techniques
- Not just automated testing
Audit
- Observing a real team
- Data, analysis, interpretation
- Processes, code metrics
- Objective/subjective analyses: hard/soft
- V&V
- Live review: bedside manner
- TODO
What was the purpose of the audit?
- Different, unknown technology used by the 2022 projects
- Seeing them make the same mistakes we did: reflection
- Team issues
- Team cohesion: sub-groups or working alone
- Step 1: awareness of issues
- Ignoring the root issue
- Importance of setting and following rules
- How different personality types interact with each other
- Differing expectations/goals between team members
- Recency effect: looking back on the start of SENG302
What was the purpose of assignment 2:
Exam
- Take-home exam
- PDF
- Learn: auto-save didn’t work in one exam
- Submit PDF on Learn dropbox
- Can look up anything; cannot communicate with others
- 4 essay-style questions
- Incomplete scenario: must make and state assumptions
- ‘I would ask this question of these people: if this response do that’
- Try to keep the answers succinct: essays, not a thesis
- Answers may have headings
- Balanced answers: do not take a strong stance
- Diagnosis, recommendations, justifications, examples
- Put the recommendations within the context of the scenario
- Cite articles
- 24 hours available
- Gives you time to think
- Reduces time pressure and anxiety
- Sleep!
- Parkinson’s law: the work grows to fill up the time
- DM Moffat on SENG401 Slack if required; class updates on exam channel
- Before the exam, feel free to chat with Moffat; alone or groups
- After the exam, cannot talk until results are released
Quality without a name? Objective or subjective quality?
Christopher Alexander: young professionals: acceptance of standards that are too low.