01. Introduction
Course breakdown:
- Scrum tutorial: 15%
  - First two labs: 8-person teams; assessment at the end on Scrum values
- Reflection report (weekly reading): 5%
  - ~500 word report due week 6
- Acceptance testing/design principles: 20%
  - Due week 12
- Final exam: 60%
  - Software development methods/design principles (term 1/2 even content split)
Preparation/reading required before each lecture.
https://agilemanifesto.org
Core Values
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Principles
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity - the art of maximizing the amount of work not done - is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Scrum
- Product owner
- Representative of the client
- Responsible for adding and prioritizing tasks in the backlog
- Scrum master
- Protects the team from the product owner and facilitates the activities
- Development team
- Should be self-organized and cross-functional - no set roles
Scrum empowers the team with a large amount of freedom to ensure the team can meet their goals.
Both individuals and teams should be able to learn and improve, and mechanisms should be in place to ensure knowledge/learning can be transferred across individuals/teams.
Lean
See https://roadmunk.com/guides/lean-development/ and [Agile Velocity](https://agilevelocity.com/7-principles-of-lean-software-development/).
Optimization process based on Toyota car manufacturing in the 1950s.
Principles
- Eliminate waste; anything that doesn’t provide value to the customer
- Partially done work, unnecessary code, bureaucracy, ineffective communication etc.
- Build quality in as a core aspect of the project
- Continuous development ensures issues are caught quickly
- Test-driven development
- Create knowledge; document or share the reasoning behind decisions
- Defer commitment; keep options open
- Don’t plan too far ahead in advance
- Don’t commit to ideas without understanding the requirements
- Deliver fast
- Respect people
- Healthy amounts of conflicts
- Communicate proactively
- Optimize the whole
- When optimizing processes, ensure this benefits the project as a whole
Kanban
See https://kanbanize.com/kanban-resources/getting-started/what-is-kanban.
In summary, Kanban is a layer on top of (but not replacing) existing processes that encourages incremental changes and decision making at all levels.
Kanban limits the number of work in progress items in each stage to reduce the amount of wasteful context switching and avoid clogging up the pipeline further up.
Metrics:
- WiP: number of work in progress items
- Queue: compares tasks in WiP to queue to measure efficiency
- Throughput: average work units/time unit
- Lead time: time between customer demand and deployment
02. Scrum 101
Scrum Values
- Openness to feedback and ideas
- Focus; avoid distractions
- Respect others, even when things go wrong
- Courage to take risks and fail
- Commitment to the team and project
Lifecycle
Sprints center around the backlog; user stories and functionality that needs to be completed:
- Sprint retrospective and planning
- Collaborative development and testing (TDD)
- Report to the team, offer help
- Sprint release; every sprint should end with a shippable product
- Sprint review meetings with the product owner
Ceremonies
- Standups (daily scrum meetings)
- Sprint planning meetings
- Retrospectives
- Sprint review meeting
Initial Startup
- Start with a vision:
- What should it do
- Why should it do it
- How will it be used
- Who will use it
- Ask what goal for the product is in one month, one year etc.
- Refine its objectives and discuss these with the stakeholders (inc. product owner)
- Create an initial backlog filled with user stories
- Agree on working modes and standards (coding standards, communication channels, tools etc.)
Compared to kanban which is task-oriented, scrum has a larger initial phase before the first sprint.
Sprint Planning
- The backlog should be cleaned/refined by the product owner; other members may assist
- Priorities should be clearly stated
- The sprint should have a global goal; a theme which ensures everyone is targeting the same direction
Chunking stories into tasks may help with prioritization
Each implementation task should be sufficiently described: mockups, design, acceptance criteria etc.
User Stories
A promise of a conversation to be had:
- An intention of some functionality
- Conversation between the non-technical product owner and dev team
- Has acceptance criteria - don’t just tick off criteria blindly; go back to the conversation and what the product owner wants
Part 1: The What
- PO presents highest priority stories
- Ensure the team fully understands the user stories the team may commit to this sprint (plus one or two more - priorities may change so it is not necessary to understand them all)
- Estimate the complexity of each story in terms of points
- Use previous velocity (points delivered in previous sprint) to estimate amount of work that can be done
- Commit to the stories that will be taken on
Planning Poker
Uses Fibonacci-like progression (0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100, ∞, ☕, ?); reflects increasing uncertainty in estimates.
Play the hands, then discuss why each person chose the value. Repeat until a consensus is reached.
For the first time the group is together, come up with a hypothetical scenario to calibrate what each number means.
NB: Don’t try and convert it to hours
NB: for large numbers, using the exact Fibonacci values makes the estimate seem more accurate than it really is.
Part 2: The How
- Break down stories into small tasks
- Tasks should be SMART
- Around 3 hours of work or less
- Describe them thoroughly so that anyone in the team can do it
- Estimate task durations (in hours) collaboratively; ensure a consensus is reached
- A story is done only if all acceptance criteria are met
For the first few sprints, make an estimate and then double it; delays will cascade and affect dependent tasks.
Monitoring
- Use a burn-down chart; the estimated amount of work left over time
- At the beginning, break down all stories into tasks and estimate the time each takes; otherwise, the chart will not be very useful
- If the sprint backlog will be finished early, estimate as a team how many more stories you can take on and contact the PO
- The PO should NEVER be surprised
- Let PO/SM know early if the sprint backlog won’t be cleared
- Surprise means poor communication and likely lack of alignment between stakeholders and product
- Spikes - research/experimentation into some method/technique should be time-boxed to ensure you do not waste an excessive amount of time
Standups
- Used in both Scrum and Kanban
- Must be short and straight to the point
- Max 5 minutes/member
Every morning (if full time: 2/week for SENG302), answer three questions:
- What did you achieve yesterday
- What will you do today
- What issues are you facing
- Don’t waste an entire day trying to solve an issue alone; ask someone
Review
- Demonstrate outcomes to the PO and stakeholders
- Follow a scenario which combines user stories that were completed in the sprint
- Use realistic test data
- Stakeholders sign off on the functionality
- Gather feedback
Retrospective
The team discusses what happened in the sprint.
- Prefer coffee shops/break rooms over large, impersonal rooms
- Come prepared, with issues/suggestions communicated beforehand
Discuss issues about:
- Communication: within team, or with PO/stakeholders
- Processes
- Scope: clarity of product vision
- Quality
- Environment: is the team dynamic toxic?
- Skill: is training required?
Ask: what are we doing well? How can we improve it?
Return to the next retrospective asking if improvements were made.
Bubble method: create a list of issues alone; pair with another team member and discuss. Repeat until the whole group is together.
Circle method: create a list of action items, sorting them by how well they went. Group close and related items together and fix these as a whole.
Lessons from the Tutorial
- Tasks in the story move together
- Tick off tasks when done
- Review ACs before starting and when reviewing the story
- Each story should have a sub-group assigned to it, and within the sub-group tasks should be assigned to one or two people
- Check the story points and its difficulty before deciding on sub-group size
- Coordinate between sub-groups to ensure a consistent design
- Subgroups that finish at a similar time should review the others’ work
What is Ready and Done?
Ensure all team members have the same understanding of these two words. What quality level is expected for ‘ready’ or ‘done’?
For stories, ready could mean:
- The story is given a point estimate
- Acceptance criteria are clearly defined
- Story is in the product backlog with the correct priority level
- All relevant documentation is attached
- All tasks from the story will stay in a single sprint
Done could mean:
- All acceptance tests passed
- No regressions
- Build/deployment scripts updated
- The product owner has reviewed the functionality
- End-user documentation updated
03. Agile Requirements Analysis
“The best way to get a messed up project is to start earlier than the basic requirements have been defined” - Mario Fusco.
Scrum
Roles
Product owner:
- Customer voice; represents the customer
- Translates user/customer demands into user stories
- Maintains and prioritizes the backlog
- Negotiates timing and content of releases with the team
Scrum master:
- Coach: facilitates the work and acts as the process leader
- Facilitates communication within and outside the team
- Represents management, but protects the team
Team:
- Everyone is a developer; no hierarchy
- Self-organizing and cross-functional
- Collective responsibility for achievements
Product Backlog Items
Epic:
- Large piece of work that may span multiple sprints
- Abstractly defined, high-level requirements
- Must be broken down into stories
Story:
- Small, well-defined piece of work with concrete and extensively defined acceptance criteria
- Must be handled in a single sprint
Task:
- Concrete and time-boxed piece of work
- May have subtasks
- Assigned to one or a pair of programmers
Product vs Sprint Backlog
Product backlog:
- Everything that must be done
- Prioritized and maintained
- Not all items have estimates
Sprint backlog:
- Items that will be handled in this sprint
- Fixed for the sprint; items shouldn’t usually be added or removed
- Only high-priority items may modify the current sprint
- All items have estimates
Scrum Board
All items should be in one of the following columns:
- Product backlog
- Sprint backlog
- In progress
- Review
- Done
Snow ploughing:
- Start the sprint with the highest priority items
- Group related items (e.g. same story) together; reduces cost of context-switching
Users
Personas
Can’t think in terms of some generic user:
- Hard to extract clear requirements
- Hard to identify added value
Personas:
- Fictitious people with fictitious details
- Characteristics (skills, environments, goals) related to the system
- User archetypes synthesized from common attributes
- Clear behavior patterns and goals
Keep the number of personas limited.
User Profiles (Actors)
Focusing on classes of users. Classes can be defined by the users’:
- Physical (e.g. disabilities)
- Cognitive (e.g. disabilities/motivations)
- Relevant social/ethnic/religious specificities
- Educational background
- Task experience
Comparison
Profiles and personas should be defined by user interviews, not imagination.
Personas focus more on users’ motivations, while profiles focus more on who the users are
User Stories
- One goal; one interaction
- Natural language
- One set of acceptance criteria
Usually follows the template: As a role, I action so that value.
The story is a promise of the conversation.
INVEST
- Independent: stories should be implementable in any order
- Negotiable: invitation to a conversation; the product owner and development team should be able to discuss the story and make changes to it if necessary
- Valuable: should serve a purpose to the customer. A beautiful backend and no frontend will be useless to the customer
- Estimable: development time can be estimated
- Depends on team experience
- Spike may be required to make an estimate
- Larger stories are harder to estimate. Hence:
- Small: KISS. Larger stories are harder to estimate too
- Testable: the story and acceptance criteria are understood enough that tests can be written for it
Ask if you can break the story down further and still get value from each part. If so, break it down.
Promote the story into an epic if:
- The story is vague or contains undefined terms
- e.g. ‘user can see item details in the list’. What details?
- Uses conjunctions or has multiple use cases
- Has hidden/unexplored business rules
- e.g. how long is cart content stored? What type of users can access this functionality?
- Has multiple display possibilities
- What devices does it have to work on? Mobile? Desktop? IE?
- Does the data have to be available in multiple formats?
- Has exception flows
- e.g. what happens after n failed login attempts
- Has unspecified data types or operations
Validating Requirements
- Valid: does the requirement reflect the users’ needs
- Consistent: do any requirements conflict with each other
- Complete: is the definition self-contained
- Realistic: does your current technology/knowledge allow such a feature
Common practices:
- Review: talk with stakeholders and systematically go through requirements
- Prototype: build proof of concept, or write some sketches
- Test cases: from the user’s point of view, make sample usages
Slicing Tasks
Don’t have people dedicated to a particular layer of the project (e.g. frontend, backend, database):
- If one person fails to deliver, all the work will be wasted
- Causes tunnel vision
Instead, tasks should involve the whole stack:
- This allows you to deliver partial stories
- Greater focus on user needs and added value
SMART
SMART tasks are:
- Specific: everyone on the team has a full understanding of the task
- Measurable: the team has agreed on what ‘done’ means and the task has acceptance criteria
- Achievable: the task owner should be able to finish the task and/or ask others for help
- Relevant: the task provides value to the customer
- Time-boxed: the team has a rough estimate of how long it should take
04. Agile Team Management
Servant Leadership
- Empathy: understand and share feelings of team members/stakeholders
- Awareness: aware of impacts of their decisions/behavior
- Community: maintains a feeling of membership
- Persuasion: dialogue instead of coercion or commanding
- Conceptualization: focus on the big picture; trust in your team for daily tasks
- Growth: develop everyone’s skills
The scrum master should help, not direct, their team:
- Drop the ego
- Act fairly towards all team members
- Should be confident and humble
- Be accessible
Archetype of a Scrum Master
The scrum master should empower communication:
- Between team members and stakeholders
- Between engineering and marketing
- Inside the team
They also need to protect the team when necessary; act as a shield to ensure the team can be successful.
They should refrain from jumping into technical details:
- Trust in the team
- Do not rescue team members too quickly
- Avoid technical discussions during planning
Team Organization
- Recognize everyone as an individual while ensuring there is collaboration within the team.
- Everyone is a developer and does everything
- Work on different parts of the system
- Have some specializations; work on your core technical competencies, but also be competent in other fields:
- Ensure there is continuous knowledge transfer
- This ensures your team is resilient to unexpected losses
- Bus factor: what’s the smallest number of people that can get hit by a bus before the project fails due to lack of knowledge/experience
- Developers should work outside their comfort zone
- Ensure there is continuous knowledge transfer
Agenda
Sprint planning:
- Ensure everyone knows their availability
- People may be working on other projects
- Check all deadlines for other courses
- Interruptions, team communication etc. is time consuming
Create an agenda:
- Don’t rely on memory alone
- Put all deadlines on the agenda
- Bring it to all planning sessions
- Make non-negotiable times clear at the beginning of the project
- Times where you will not reply even if the server is on fire
Changes
The sprint plan should be protected: decisions were made with the PO and should not be changed on a whim.
Stakeholders may come at any time and ask for:
- Small cosmetic changes; superficial makeup
- New functional changes; stories that should be estimated and planned
The developer must know the threshold between the two and communicate with the stakeholders:
- Ask if the added makeup is worth it
- Decide whether the story can be added to the developer’s buffer
Issues and Bugs
Issue: a problem identified before the sprint review at the latest.
Bug: a problem discovered later, at the earliest during regression tests; these must be added to the backlog.
High priority bugs may be taken into account during the current sprint. This requires discussion with the PO (as items are being re-prioritized).
If direct communication is used instead of formal reports, ensure reproduction steps are communicated.
Impediments
- Long meetings: stick to essential stakeholders; keep them time-boxed
- Illness: don’t come, plz
- Broken builds: especially with CI/CD, this becomes the top priority and interrupts the whole team
- Tools: if you don’t have the right tools you can’t develop
- Third parties: consider alternatives or work-arounds
- Scope creep: review stories and tasks thoroughly to reduce the amount of creep
- Unreliable PO: can’t get rid of them, so form strategies to deal with them
- Team problems: use retrospectives and involve the scrum master as soon as possible. Don’t let the problem grow
- External incentives: plan around clubs and other life priorities/responsibilities
Record traces of all issues for sprint reviews and retrospectives so that the team can learn from them if the issue is faced again. See [Wisdom of the Ancients](https://xkcd.com/979/).
Bad Behavior Patterns
- Taking over everyone’s jobs
- Command: controlling everyone’s jobs
- Public blaming and shaming
- Siloing: specializing into one aspect; low bus factor
- Poor code quality practices
Reactive behaviors:
- Fear: failure is an option
- Pressure: adding more buggy features on top of a poorly-built base
- Experiment: run spikes when necessary
- Urgency: fire-fighting is exhausting
05. Monitor Project Progress
How do you track project progress? Key Performance Indicators (KPIs). KPIs are:
- Quantifiable measures
- Indicators of success/fails
- Semantics need to be defined per context
These could be things such as system uptime, income, returning visitors.
Metrics:
- The team must understand the metric
- The team must collect the data
- The metric must have value to the team
- Collecting the data should require little effort
Every Agile framework has its own set of metrics: Lean focuses on execution time; Kanban on task flow; Scrum on the team’s ability to deliver.
Monitoring Code Quality
Pair Programming
Pair programming is a great way to ensure high code quality and monitor task progression: the code you write is a reflection of the team dynamics.
Code Review
Review the code for:
- Invalid operation definitions (e.g. too few/many arguments) or usages
- Missing comments/documentation
- Missing unit tests
- Code quality
- Number of files, classes (e.g. god classes), LoC
- Code duplication
- Documentation/comment rates
- Code smells/technical debt
- Input sanitization
Sprint Reviews
The very last moment where you can track progress. Ask the PO before the review if necessary.
Semi-Automated Tools
- IDEs (e.g. IDEA)
- Linters (e.g. ESLint)
- Static analysis (e.g. SonarQube)
Automated checks:
- Enforce common style
- Refactor common patterns
- Identify possible bugs
Aggregated vs simple metrics:
- Aggregated metrics are an overview of the quality:
- Reliability: number of issues classified as bugs
- Security: number of vulnerabilities (e.g. XSS)
- Maintainability: number of code smells
Simple metrics for targeted aspects:
- Coverage: % of code covered by unit tests (e.g. num. lines, num. branches)
- Duplication: % of duplicated code
- Comments: proportion of comments
- Code-related: e.g. cyclomatic complexity, class/method size
Refactoring and Re-engineering
Refactoring increases code quality:
- Changes code structure without altering its behavior (e.g. extracting part of a function into a private method)
- Should be done incrementally all the time
- Should be low-risk and part of day-to-day tasks
Re-engineering fixes behavioral issues:
- Re-thinking the behavior of a part of a product (e.g. moving business logic to another component)
- Should be planned carefully
- Is a high-risk activity
Team Dynamics
Communication:
- Day-to-day communication can act as a metric for the working atmosphere
- Ensures commitment and accountability
Stand-ups:
- Daily monitoring of progress; identify roadblocks as they arise
Retrospectives:
- Safe place to debrief
- Collaboratively think about improvements
Sprints and Releases
Burn-down charts: graph mapping days remaining until end of sprint against remaining story points (or possibly hours of work left).
Alternative release burn-down chart:
- Predicts the number of sprints until release; a bar chart with one bar per sprint
- Scope is likely to increase each sprint
- Top of each bar = initial story points (for the release) minus story points completed towards the release
- Bottom of each bar = added scope
- The release is likely when the lines connecting the tops and bottoms of the bars meet
Sprint interference chart:
- Hours spent/sprint on non-sprint backlog tasks
- Emails, meetings etc.
- Try and ensure this does not increase over time.
Remedial focus chart:
- Velocity: number of story points completed per sprint
- Story points from scope changes should be measured separately
- Should decrease over time
- If the velocity decreases over time, your code quality is likely decreasing
06. Agile Software Modelling
System architecture is a reflection of organization hierarchy.
Modelling
Used to handle complexity: lots of functionality, interactions (possibly concurrent) and constraints.
Identifying components: reuse existing libraries/resources.
Abstraction levels have been raised over the years to try to reduce complexity.
Representations/models/visuals are good if:
- You know the notation: syntax and keys
- You understand the context: vocabulary
- The purpose is clear
Visual representations should:
- Be unambiguous
- Clearly state what they represent
- Identify who they target
Representations are a model of reality. All models are wrong; some models are useful.
Class Diagrams
Every organization has their own business rules, their own vocabulary, and concepts (with relations to each other).
Domain concepts:
- Will be logically manipulated
- Will be stored in the database
- Should be responsible for their states/logic (encapsulation)
Class diagrams are a static representation (not business logic). The semantics of all elements must be clear and fully unambiguous.
The terminology used by the client should also be used in the code.
Tactics
Architecture tactics:
- Relate one quality attribute to one architectural decision
- Tackle one concern at a time
Architecture style and design patterns:
- Are reusable, off-the-shelf, conceptual solutions
- Must be taken into account throughout the whole project
- May encompass multiple tactics
These shape the system design early on:
- Architecture drift can occur faster than you expect: taking the easiest path in the short term may cause issues later
- Re-engineering may be painful or impractical
Documentation
README files should state:
- Context/objectives
- Authors, contributors, version, other pointers
- Deployment procedures, testing, dependencies
- Describe content and refer to licensing
Wiki pages:
- External analysis (e.g. wireframe, architecture, decisions)
- Manual tests (and results), DoXs, action points
- Organized into categories
- Should have a landing page
- Should be updated
- See ADR, MADR
Stories as Use-Case Scenarios
From a story it is difficult to:
- Put users in relation with features
- Identify system boundaries (e.g. interfaces, APIs)
- Evaluate what domain entities are created/read/updated
- Understand business/logical processes (e.g. controls)
- Understand entity properties
Robustness diagrams:
Symbols (left to right): actor, boundary, control, entity, property.
- Boundary: interface
- Control: process/method of the interface
OR:
- Entity/boundary/control = MVC
Example: As a user, I want to search for events by their types, location or date so that I will be able to subscribe to them later.
Example diagram: the user (actor) interacts with the event interface (boundary), which offers search and subscribe controls; search operates on the event entity, which has location, date and type properties.
Partial scenarios
As a user, I want to do x so that value: the value does not need to be implemented in the story, the story acts as justification for the task.
Wireframes
- Identify details in forms
- Domain concepts and attributes
- Allows discussion of general layout
c.f. robustness diagrams:
- Controls: what will be supplied between subprocesses
- Entities: refine semantics of model elements
- Transitions: from screen to screen
07. Testing and Mocking
| While designing | Prepare |
|---|---|
| Requirements | Acceptance tests |
| System requirements | System tests |
| Global design | Integration tests |
| Detailed design | Unit tests |
Objectives
Testing shows the presence of bugs, not their absence.
Two main objectives:
- Validation: check that the software fulfils its requirements
- Verification: identifying erroneous behavior
Determine if it is fit for purpose:
- Purpose: is it safety critical? Go through formal validation
- Expectations: do users expect it to be polished?
- Marketing: time-to-market and prices
Staged Testing
Run unit tests, then component tests, then system tests (scenario-based user testing).
Unit Testing
- Every feature should be testable and tested
- Each piece of code should be self-sufficient
- Use fake/simulated inputs
- Avoid human/third-party interactions
Methods should also prevent misuse:
- Regression tests
- Explicit verifications of pre/post conditions at component boundaries
Be skeptical:
- Identify domains for values that should have the same effects
- Consider edge cases (boundaries of domains)
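For example, a minimal JUnit-style sketch of boundary-value checks; the Account class and its withdraw method are hypothetical, not from the course material:
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class AccountTest {
    @Test
    void withdrawingTheFullBalanceSucceeds() {
        Account account = new Account(100);
        account.withdraw(100); // boundary: exactly the full balance
        assertEquals(0, account.getBalance());
    }

    @Test
    void withdrawingMoreThanTheBalanceIsRejected() {
        Account account = new Account(100);
        // edge case just past the boundary of the valid domain
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(101));
    }
}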
Component Testing
Various types of interfaces:
- Application behavior through operations such as method calls
- Shared memory between processes
- Messages being passed through some communication medium
Be skeptical of input data:
- Make components fail (and check that failures do not cascade between components)
- Stress testing (e.g. message overflow)
- If a call order exists, try calling operations in a different order
System Testing
- Integrate third party components/systems
- Should be performed by dedicated testers, or at least, not only developers
Scenario-based testing:
- Main usages (full interaction flows)
- Tests should touch all layers (e.g. the GUI)
Trace and record test executions in some structured way:
- Input values, expected output, observed output
- Include metadata like who, when, issue IDs
Agile Testing
For each commit run:
- Automated unit tests
- Automated story tests
Peer reviews should also be done before merging.
Any successful builds should become a release candidate.
Run manual tests on candidate at end of each sprint.
Run automated performance tests before each release.
Assertion clauses:
- Check parameter values
- Ensure invariants are actually invariants
- Useful for regression testing
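A small sketch of assertion clauses in production code, assuming a hypothetical Inventory class: parameter values are checked explicitly at the boundary, and an assert guards the invariant (useful for catching regressions):
class Inventory {
    private int stock;

    public void remove(int quantity) {
        // Check parameter values explicitly
        if (quantity <= 0 || quantity > stock) {
            throw new IllegalArgumentException("Invalid quantity: " + quantity);
        }
        stock -= quantity;
        // Ensure the invariant really is an invariant
        assert stock >= 0 : "stock must never go negative";
    }
}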
Guideline-based testing identifies common programming mistakes (e.g. null values) and ensures tests are performed on these aspects.
Acceptance Test-Driven Development
User stories accompanied by acceptance criteria.
To automate the tests:
- Define application interfaces; isolate the UI
- Use dependency injection (inversion of control)
- Fake minimal implementations for dependencies (stubs) or databases
- Split asynchronous scenarios into synchronous ones
- Fake time where possible (e.g. latency, scheduling)
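A minimal sketch of the interface/stub idea; PaymentGateway, StubPaymentGateway and CheckoutService are illustrative names, not part of the course project:
interface PaymentGateway {
    boolean charge(String cardNumber, int cents);
}

// Minimal fake implementation used in acceptance tests instead of the real third-party service
class StubPaymentGateway implements PaymentGateway {
    public boolean charge(String cardNumber, int cents) {
        return true; // always succeeds; no network, no latency
    }
}

class CheckoutService {
    private final PaymentGateway gateway;

    // Dependency injection: tests pass in the stub, production passes the real gateway
    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String cardNumber, int totalCents) {
        return gateway.charge(cardNumber, totalCents);
    }
}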
Automated Acceptance Tests
Use the same path as the users; playback tools like Selenium test directly on the GUI, but may be fragile and time-consuming.
Hence, decoupling the UI from business logic is useful.
Acceptance criteria can be tested using Cucumber.
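A sketch of what Java step definitions for an acceptance criterion can look like (cucumber-java annotations); the step wording and the EventCatalogue/Event classes are assumptions for illustration:
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.assertFalse;

public class SearchSteps {
    private EventCatalogue catalogue; // hypothetical application class
    private java.util.List<Event> results;

    @Given("an event of type {string} exists")
    public void anEventOfTypeExists(String type) {
        catalogue = new EventCatalogue();
        catalogue.add(new Event(type));
    }

    @When("the user searches for events of type {string}")
    public void theUserSearchesForEventsOfType(String type) {
        results = catalogue.searchByType(type);
    }

    @Then("at least one event is returned")
    public void atLeastOneEventIsReturned() {
        assertFalse(results.isEmpty());
    }
}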
Sprint Reviews
Prepare and plan sprint reviews:
- Ensure acceptance criteria are all running
- Acts as a last-minute verification test (smoke tests)
- May need to rework some stories
- Prefer rolling back to previous candidate rather than fixing bugs
Capacity, Load and Stress Testing
Quality requirements are often difficult to test (e.g. maintainability, auditability are not testable).
Capacity-focused requirements can be tested in a semi-automated fashion: response time can be expressed as user stories or as required system features.
08. Reliability, Resilience and Security
Ethics
Equality: same treatment for everybody
Equity: customized treatment to ensure everyone has the same opportunity
Algorithms and AI: garbage in, garbage out. If the dataset it is fed is biased, the output will be biased.
ACM code of ethics. TL;DR: respect everyone + make mistakes and reflect on your mistakes.
Reliability and Resilience
Faults, errors and failures:
- Human error: invalid input data causing the system to misbehave
- System fault: something that leads to an error
- System error: visible effects of misbehavior
- System failure: when the system returns bad results
To improve reliability:
- Fault avoidance: design, development process, tools and guidelines
- Detection and correction: test, debug, validate
- Fault tolerance: designing system to handle/recover from failures
Availability and reliability:
- Availability: probability of being able to successfully access the system
- Reliability: probability of failure-free operation
Works-as-designed problem:
- System specification may be wrong (not reflecting user truth)
- System specification erroneous (typos, not proof-read etc.)
Reliability can be subjective, affecting only a subset of users:
- Errors can be concentrated in a specific part of the system
- Responses may be slow at a specific time or location
Capacity management:
- Think carefully about software architecture, especially for I/O
- Use threads carefully (starvation and deadlocks)
- Write dedicated tests and monitor production systems
Architectural strategies:
- Protection systems
- Systems that monitor the execution of others
- Trigger alarms or automatically correct the behavior
- Multiversion programming
- Concurrent computation
- Hardware with different items/providers
- Software with different development teams
- Voting systems e.g. triple-modular-redundancy
Reliability Guidelines
- Visibility: need-to-know principle; if variables/methods don’t need to be exposed, don’t expose them
- Validity:
- Check format and domain of input values (including boundaries)
- Use if statements or regression-test-enabled assert statements
- Avoid errors becoming system failures by capturing them; never send back an error message with the stack trace
- Erring:
- Avoid untyped languages
- Encapsulate ‘nasty’ stuff
- Restart: provide recoverable milestones so that it can restart into a good state
- Constants: express fixed or real-world values with meaningful names
Security threats come from:
- Ignorance: unknown risks
- Design: security is disregarded
- Carelessness: bad design
- Trade-offs: lack of emphasis on security
The Four Rs of a Resilience Engineering Plan
- Recognition: how an attacker may target an asset
- Resistance: possible strategies to resist each threat
- Recovery: plan data, software and hardware recovery procedures
- And test the restore procedure
- Reinstatement: define the process to bring the system back
Resilience planning:
- Identify resilience requirements: minimum functionality while being attacked
- Backup and reinstatement procedures
- Classifying critical assets; how should they work in degraded mode
- Test it! An untested backup is as bad as no backup
Checklist:
- Permissions: ensure file, user permissions are correct
- Session: terminate long-running, inactive user sessions
- Overflows: take care around memory overflows
- Password: require strong passwords
- Input: sanitize inputs
09. Continuous Integration
Good programmers write code that humans can understand
The Integration Problem
When combining units into a product (after testing each unit individually), problems occur with integration:
- Conflicting dependencies
- Badly-defined or -specified APIs
Diagnosis gets harder as more units are combined.
Continuous Integration
Working software is the primary measure of progress
Agile Manifesto, 7th Principle
Integrate as you develop; when a story or unit is ready, integrate it immediately with master.
The smaller a chunk of code is, the easier it is to test and the easier it is to integrate.
Fowler’s Principles
- Single source repository
- Automate the build process
- Make the build self-testing
- Everyone commits to master/main every day
- Every commit triggers a build on an integration machine
- Broken builds should be fixed immediately
- Keep the build fast
- Test in a clone of the production environment
- Make it easy for everyone to get the latest executable
- Ensure everyone can see what is happening
- Automate the deployment phase
Single Source Repository
All code and resources should be in a single place:
- Team may be decentralized
- Centralizes traceability of changes
Version Control:
- Centralized; one master repository e.g. subversion
- One main trunk, branches available
- Distributed; two levels of commit e.g. git
Driessen’s Branching and Merging Strategy
- Master: production-ready branch
- Development: branch for in-progress work
- Implementation: branches for each story/feature
- Once feature done, merge dev to feature branch
- Peer review, then merge to dev
- Hotfix: branch for a particular bug fix. Merge to master, dev and release
- Release: branch for a particular release
- Run smoke tests etc.
Everyday Commits
- Commit any meaningful progress
- Your machine may die; don’t lose valuable work
- Make commit messages meaningful
- Allows the team to see your progress
- If your code does not build
- Push partial implementations
- Evaluate if the task was under-estimated
Merge Requests and Code Review
Ensure:
- Code is readable
- Proper naming, documentation
- Meets expected quality
- Has unit/acceptance tests
- Has configuration/migration scripts
- Won’t cause security/compatibility problems
- Code does what it is expected to do
- Commit message makes sense and describes what it does
Merge requests increase cross-functional knowledge and help with onboarding.
When leaving feedback, be open-minded:
- Reviewer’s logic is different from author’s logic
- If you’re confrontational, nothing will happen (or bad things will happen)
- Give suggestions to make it better, not just criticism
- Look for abstractions that can be added or removed
- Look for duplications with the existing code base
- Be critical when new dependencies are added
- Check unit/acceptance tests!
As a reviewee, remember that comments are made against the code, not you.
Be constructive and positive in giving feedback; point out what they have done right.
Managing Builds
Build after every commit. To reduce the burden of manual configuration, create automation scripts and use an integration server accessible to the team.
Self-testing: automate the build with upfront tests.
To make builds faster:
- Run only a subset of tests
- Use fake stubs to mock external services/resources
- Run tests asynchronously
Make a snapshot of the latest stable build accessible to the team.
Automated Deployments
Requires the deployment to be scripted.
Requires rollback scripts too:
- Unexpected error - even if it is tested on a clone, errors can occur
- Users can inadvertently trigger edge cases and bugs
10. Continuous Delivery, Deployment and DevOps
The Deployment Problem
The ‘It works on my machine’ problem.
Deployment Antipatterns
Doing it manually:
- Requires extensive/verbose deployment documentation
- Relies on manual testing
- Requires calling the development team to figure out what went wrong
- Requires updating the deployment sheet since the production environment is different
- Requires manual tweaking
Waiting until the very end to deploy a release:
- The software is tested on a new platform for the first time
- e.g. firewall rules preventing something from working
- Developers and IT often separated
- The bigger the system, the more uncertainties
More Releases
Increase release frequency: you will either suffer more or find a way to make it easy.
Continuous Integration:
- Has automated builds with testing
- Enforces self-contained sources
- Requires deployment scripts to be written
- Is done many times after each commit
Benefits of Continuous Delivery:
- Empowers teams to test and deploy the build you want
- Reduces errors; avoid error-prone, un-versioned configurations
- Lowers stress: frequent and smaller changes and a rollback process
Deployment Strategies
Adequate preparation.
You need to:
- Model the target environment
- Prepare the deployment infrastructure
- Understand server configuration
- Define the deployment strategy
Automate as much as possible:
- Use third-party libraries/servers that use textual config files; avoid GUI
- Use build chains and dependency management systems (e.g. maven, gradle)
- Prepare virtual environments (e.g. docker)
Create a disaster recovery process (e.g. rolling back DB to previous schema)
Configuration Management
Keep everything under version control:
- Source files
- Configuration files including DNS zone files, firewall settings etc.
- BUT NOT PASSWORDS/SENSITIVE DATA; store these only locally (environment variables)
Think about dependencies: automated dependency retrieval is handy, but try to avoid having to ‘download the internet’.
Deployment Pipeline
- Commit stage: on the development branch, commit, test and review, run CI pipeline
- Acceptance stage: acceptance tests
- User Acceptance Tests (pre-release stage): user tests
- Capacity stage: capacity tests
- Production: smoke tests
After each stage, review the results/metadata from the pipeline.
Store the last few binaries in the artefact repository so that you can easily rollback.
Practices
Build only once:
- Even a slight difference in build environment may cause issues
- Use the build from acceptance stage
- Production servers cannot be updated as frequently so the libraries in there may be different; by making the build constant, it narrows down where the issue may be
- Deploy the same way everywhere
- Smoke-test deployments: script the automated start-up of the system with a few simple requests
- Deploy a copy of production with the same networks, firewalls, OS and application stack
- Propagate changes into the whole pipeline as they appear; if a bug appears, go through the entire process again
Creating Deployment Strategies
- Collaborate with all parties in charge of the environments
- Create a deployment pipeline plan and configuration management strategy
- Create a list of environment variables (secrets) and a process of adding them
- A list of monitoring requirements and solutions
- Discuss when third-parties are part of the testing
- Create a disaster recovery plan
- Agree on a service level agreement (SLA) and on support
- Create an archiving strategy of outdated data
Zero Downtime Releases
Avoid as much downtime as possible (or at least avoid downtime during peak times).
Hotplug facilities exist:
- Decouple the system, modularizing the code so that pieces can be migrated
- Offers roll-back to the previous version if necessary
If not possible:
- Reroute web-based systems
- If data migrations are required, a data freeze may be required
Blue-Green Deployment
In distributed systems e.g. web apps, re-routing is easy and it is possible to have multiple versions available at the same time.
Have two identical environments, blue and green for each server type (e.g. web server, application server, database server), using a router to determine which slice is being used at any given time.
Canary Deployment
Real-world systems are not easily cloneable:
- Real-world constraints not fully testable
- Scalability/response-time
Hence, use canary deployment:
- Deploy a subset of servers with the new version
- Define a subset of users that will receive the new version (e.g. by IP address)
- Gradually increase the size of the set
11. Wireframes, Mockups, Prototypes and User Experience
Basic Principles
Usability
User experience is more than usability:
- Usability: does it meet user requirements?
- Efficiency: how fast is it to use? How many clicks does it require?
- Satisfaction: is it simple and elegant to use?
Have empathy:
- Put yourself in your users’ shoes
- What prior knowledge do they have? What is their target knowledge?
- Understand their frustrations and desires
Design makes a difference; two identically-capable websites will be distinguished by their UI
User Experience Honeycomb
Useful: is your interface serving a purpose?
Usable: is it easy to learn and use? Is the learning curve short?
Findable: can users find what they are searching for?
Credible: is the company and product trustworthy?
Accessible: is it accessible to disabled people? ARIA labels, contrast, font size etc.
Desirable: does the UI look good and make it easy to use?
If it has all six, then your application has value.
What Makes a Great UX?
Not Just Style Sheets
Simplicity: do not overload the UI; white space is important
Consistency: consistent behavior (e.g. button and links have consistent behavior) and styling (colors, page layout, button names)
Dual coding: use icons and labels; just icons makes it accessible only to expert users
Don’t annoy the user: avoid video autoplay, pop-ups
Content
Appropriate content:
- Avoid long pages; infinite scrolling not suitable for all purposes
- Expired or messy information is great for scaring users away
- Use interactive content and infographics; just text is boring
Keep users informed:
- Give appropriate feedback (e.g. success/error messages)
- Show action progress if it is expected to take a noticeable amount of time
- Modals visually break the ‘normal’ flow; use only for critical information
UX Tools
Input controls:
- Text/pre-formatted fields
- Toggles/sliders, buttons, checkboxes, radio buttons
- Dropdowns
Navigational components: design navigational components for your purpose; don’t just plonk them on there
If a user doesn’t understand how the navigation works, they will just leave
- Breadcrumbs
- Search fields
- Pagination
- Hyperlinks (not too many)
Information components:
- Tooltips, icons, progress bars
- Message boxes, notifications, modal windows
Plus containers - accordions, tables, tabs etc.
Infographics
- Decide what data you want to show: What do you want to tell; to whom?
- Recognize the potential of the information - its essence
- Explore possibilities; rapid hand-drawn sketches
- Choose, then create a wireframe/mock-up of your concept
- Test with users; ask specific questions about the data, not just ‘does it look good’
- Draw shapes and add interactivity if possible (e.g. buttons, animations)
- Decide on typography, taking into account readability and the visual identity
Semiology of Graphics
Visual Variables
- Size
- Shape
- Color:
- Hue
- Value
- Saturation
Misc:
- Use whitespace; a simpler design emphasizes the elements that exist
- Think responsiveness
- Avoid large banners; they take up a lot more space on mobile
Test Test Test
Wireframes: low-fidelity sketch; flow between screens. Be extra careful who you engage with; if you show the wireframes, everyone and their dog will have different nitpicks and complaints.
Mock-ups: high-fidelity representations based off of the wireframes; no interactivity. Focus on the visual identity.
Prototype: limited functional implementation (e.g. mocking) but with a functioning workflow. Concentrate on visual behavior, risky features.
12. Design Principles 1
How do we quantify ‘good’? How precise can we be? Common language required to describe this.
Big-picture language
Cohesion:
- Things are in the right place
- Data and behavior together
- Keep connections local
Coupling:
- Independence between modules
- Separation of concerns/information hiding
- Simple interfaces
Cohesion good; coupling bad.
Why is software design hard?
Problem: complexity
Solution: decomposition
- Too big? break it up
- Too many parts? Hide them
- Too many connections? Decouple it
- Changing too much? Abstract it
Methodologies
Top-down:
- Functional decomposition
- Stepwise refinement
- Transaction analysis
Bottom-up:
- Combination of lower-level components: figure out what you need, then combine them
Nucleus-centred (OO):
- Information hiding
- Decide what the critical core (algorithms etc.) is, then build interfaces around it
Aspect-oriented (e.g. security):
- Separation of concerns
Nucleus-Centred Design
Start with the tricky bits.
Hide information; decisions, choices, things that may change in the future (David Parnas).
Identify design decisions with competing solutions; isolate these details behind an interface i.e. OO-design; state/access methods encapsulated.
Example: Student
Student: name, id, display()?
StudentDialog: holds a reference to a Student, display()?
Where should display() go?
If in StudentDialog, allows for separation of concerns.
If in Student, keeps related data and behavior together, increasing cohesion.
Information-Hiding
Encapsulation: drawing a boundary around something.
Information hiding: hiding design decisions from the rest of the system to prevent unintended coupling.
Encapsulation = programming feature; information hiding = design pattern.
Encapsulation Leak
When implementation details get exposed, private properties can be modified outside of the class, causing inconsistent/invalid states.
// Student
private Set<Enrolment> enrolments;

public Set<Enrolment> getEnrolments() {
    // Bad: exposes the internal set, so callers can add/remove elements
    // return enrolments;

    // Better: items cannot be added/removed, but setters/getters can still be called on elements
    // (a deep clone would prevent even that)
    return Collections.unmodifiableSet(enrolments);
}

// Course
public void add(Student student) {
    // Bad: the set is being modified outside of the Student class
    student.getEnrolments().add(new Enrolment(student));
}
Coping with Change
Figure out the solid bits; the invariants, and make them the framework of your program. Hopefully, the problem won’t change.
Find the wobbly bits and hide them away. In other words, make stable abstractions.
Hiding Design Decisions
If you chose something (i.e. multiple options available), hide it:
- Data representations
- Algorithms
- IO format
- Mechanisms (inter-process communication, persistence)
- Lower-level interfaces
A DAO (data access object) allows us to define an interface to access and modify objects.
This allows the internal implementation details to be hidden and thus changed as needed.
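A minimal DAO sketch; StudentDao and JdbcStudentDao are illustrative names. The rest of the code depends only on the interface, so the storage mechanism behind it can change:
public interface StudentDao {
    Student findById(long id);
    void save(Student student);
    void delete(Student student);
}

// One possible implementation; could be swapped for an in-memory or ORM-backed version
public class JdbcStudentDao implements StudentDao {
    public Student findById(long id) { /* SQL SELECT here */ return null; }
    public void save(Student student) { /* SQL INSERT/UPDATE here */ }
    public void delete(Student student) { /* SQL DELETE here */ }
}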
Open-Closed Principle
Make your system open for extension but closed for modification.
Any new use case should be addable by extension (e.g. a new implementation of an interface) without modifying existing code; see the sketch below.
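A sketch of the idea with hypothetical Report/ReportExporter classes: new formats arrive as new classes, and the existing classes are never edited:
interface ReportExporter {
    String export(Report report);
}

class CsvExporter implements ReportExporter {
    public String export(Report report) {
        return report.getTitle() + "," + report.getBody();
    }
}

// New use case: add a new implementation; CsvExporter and its callers stay untouched
class JsonExporter implements ReportExporter {
    public String export(Report report) {
        return "{\"title\": \"" + report.getTitle() + "\"}";
    }
}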
Tell, Don’t Ask
Decisions based entirely upon the state of one object should be made inside the object itself.
Avoid asking for information from an object and using it to make decisions about that object.
If this is happening, it may be a sign that the data is being stored in the wrong place.
Example:
VendingMachine m = new VendingMachine();
m.vend(3); // the vending machine should check and restock itself
if (m.stock < 10) m.reorder(); // asking: bad
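A sketch of the ‘tell’ version with a hypothetical VendingMachine: the restocking decision is made inside the object that owns the state:
class VendingMachine {
    private int stock = 20;

    public void vend(int quantity) {
        stock -= quantity;
        // Tell, don't ask: the machine decides for itself when to reorder
        if (stock < 10) {
            reorder();
        }
    }

    private void reorder() {
        stock = 20; // hypothetical restock
    }
}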
13. Design Principles 2
Keep it simple:
- Aim for simplicity
- KISS
- YAGNI
Leave decisions to the last responsible moment: delay the decision so that you have the most information.
Refactoring
Leave the world a better place than you found it.
Martin Fowler abridged: refactoring makes small behavior-preserving transformations, each of which are ‘too small to be worth doing’ but cumulatively have a significant effect.
Object-Oriented Design
Data modelling: focus on data internal to the object
Behavior modelling: focus on services provided to the external world
OO models both; just a question of where you start from.
Inheritance: The Dark Side
Mistakes:
- Using inheritance for implementation
- ‘Is a-role-of’
- Becomes
- Over-specialization
- Violating the LSP
- Changing the superclass contract
Principle: if it can change, it isn’t inheritance.
Using Inheritance for Implementation
Favour composition over inheritance.
e.g. instead of Stack inheriting from Vector, Stack should have a Vector as a private variable; the data store for the stack.
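A sketch of that composition (using an ArrayList here rather than the legacy Vector): only stack operations are exposed, and the data store remains a private detail:
import java.util.ArrayList;
import java.util.List;

public class Stack<T> {
    // Composition: the data store is a hidden detail, not a superclass
    private final List<T> items = new ArrayList<>();

    public void push(T item) {
        items.add(item);
    }

    public T pop() {
        return items.remove(items.size() - 1);
    }

    public boolean isEmpty() {
        return items.isEmpty();
    }
}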
‘Is a-role-of’
Example inheritance tree:
Person
  Student
    Postgrad
  Staff
    Tutor
    Admin
    Lecturer
      Professor
What if a person is both a postgrad and tutor?
Instead, have each Person have multiple Roles. In the real world, when a person changes role, the person doesn’t change.
This allows for separation of concerns.
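A minimal sketch of the roles approach (Role, Postgrad and Tutor are illustrative): a person can hold several roles at once, and roles can be added or removed without the person changing class:
import java.util.HashSet;
import java.util.Set;

interface Role { }
class Postgrad implements Role { }
class Tutor implements Role { }

class Person {
    private final Set<Role> roles = new HashSet<>();

    void addRole(Role role) { roles.add(role); }
    void removeRole(Role role) { roles.remove(role); }

    boolean has(Class<? extends Role> roleType) {
        // e.g. person.has(Tutor.class)
        return roles.stream().anyMatch(roleType::isInstance);
    }
}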
The ‘Becomes’ Problem
e.g. EligibleStudent and IneligibleStudent inherit from Student. What happens if a student becomes ineligible? The object must switch class because of a relatively trivial detail.
Just have eligibility as a boolean in the Student class.
Principle: Inheritance isn’t dynamic.
Over-Specialization
Method arguments etc. should use the most general interface/class you can get away with.
Violating the Liskov Substitution Principle
LSP: the behavior of a method shouldn’t change based on which subclass of its declared argument types it is given.
e.g. setWidth, setHeight method for a Rectangle. Square is a subclass of Rectangle. Method taking Rectangle as argument could be given a Square: if it sets the width, the height will be set silently; area/perimeter calculations will give unexpected results.
If methods start telling white lies, you start walking along the path to hell.
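The classic Rectangle/Square sketch of the violation: Square keeps its sides equal, so code written against Rectangle silently gets the wrong answer:
class Rectangle {
    protected int width, height;
    void setWidth(int w) { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // White lie: setting one dimension silently changes the other
    void setWidth(int w) { width = w; height = w; }
    void setHeight(int h) { width = h; height = h; }
}

class AreaCheck {
    static int expectTwelve(Rectangle r) {
        r.setWidth(3);
        r.setHeight(4);
        return r.area(); // 12 for a Rectangle, 16 for a Square - the LSP is violated
    }
}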
Single Responsibility Principle
Every module or class should have responsibility over only one part of the functionality - this responsibility should be fully encapsulated.
Why?
- More responsibility = more reasons to change
- Prevent one class/module from knowing too much
Violations are typically found in controllers and initializers.
Interface Segregation Principle
No client should be forced to depend on methods/interfaces it doesn’t use.
- Reduces risk of ‘white lies’ that one day turn out to bite you in the face
- Split interfaces into cohesive, smaller, more specific ones
- Reduce coupling
e.g. iPhone interface can be split into widescreen iPod, phone, internet communicator device interfaces.
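A sketch of the same idea with illustrative interfaces: each client depends only on the small, cohesive interface it actually uses:
interface MusicPlayer { void playSong(String title); }
interface Phone { void call(String number); }
interface WebBrowser { void browse(String url); }

// The device implements all three, but no client is forced to see all of them
class SmartPhone implements MusicPlayer, Phone, WebBrowser {
    public void playSong(String title) { /* ... */ }
    public void call(String number) { /* ... */ }
    public void browse(String url) { /* ... */ }
}

class PartyPlaylist {
    // Depends only on MusicPlayer; cannot accidentally call or browse
    void play(MusicPlayer player) { player.playSong("some song"); }
}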
Dependency Inversion Principle
1: High-level modules should not depend on low-level modules; both should depend on abstractions.
2: Abstractions should not depend on details; details should depend on abstractions.
i.e. high-level objects should not depend on low-level implementations. Use abstractions; no one likes micro-managers.
When things change, you want as little stuff around it to change.
To do this, try to avoid new; explicitly instantiating concrete instances.
Dependency Injection
Passing objects (that conform to the broadest interface you can get away with) into a constructor rather than creating concrete instances yourself.
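A minimal constructor-injection sketch (NotificationService and Messenger are hypothetical names): the service depends on an abstraction and never calls new on a concrete messenger itself:
interface Messenger {
    void send(String recipient, String message);
}

class EmailMessenger implements Messenger {
    public void send(String recipient, String message) { /* SMTP details hidden here */ }
}

class NotificationService {
    private final Messenger messenger;

    // The caller decides which concrete Messenger to supply
    NotificationService(Messenger messenger) {
        this.messenger = messenger;
    }

    void notifyUser(String user) {
        messenger.send(user, "Your build has finished");
    }
}

// Usage, e.g. in the wiring/main code:
// NotificationService service = new NotificationService(new EmailMessenger());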
SOLID Principles
Single responsibility.
Open-closed principle.
Liskov substitution principle.
Interface segregation principle.
Dependency inversion.
14-19. Design Patterns
Object-Oriented-Design Experience
High-level: structural models (layered, client-server etc.), control models, pipes & filters.
Idioms: getters/setters, while(*dst++ = *src++) etc.
What comes in between? OOD wisdom - collections, event loops/callbacks, MVC etc.
Sidenote: Literate Programming: Tangle & Weave
The order in which code makes sense to read is not the same order the compiler wants to receive it in - so weave up the documentation and tangle up the source code.
Talk about different parts of the program from different directions - some things are best described as code, some as text, some as diagrams.
What are Design Patterns
- Distilled wisdom about a specific problem that occurs frequently in OO design
- A reusable design micro-architecture
The core is a simple class diagram with extensive explanation - documentation for an elegant, widely-accepted way of solving a common OO design problem.
Design patterns are:
- Reusable chunks of good design
- Solutions to common problems
- Not a perfect solution, but balances forces
- Essential for OO developers
A definition for ‘Design Pattern’
A solution to a problem in a context
Forces
Correctness: completeness, type-safety, fault-tolerance, security, transactionality, thread safety, robustness, validity, verification etc.
Resources: efficiency, space, ‘on-demand-ness’, fairness, equilibrium, stability etc.
Structure: modularity, encapsulation, coupling, independence, extensibility, reusability, context-dependence, interoperability etc.
Construction: understandability, minimality, simplicity, elegance, error-proneness, etc.
Usage: ethics, adaptability, human factors, aesthetics etc.
Resolution of Forces
Impossible to prove a solution is optimal; make an argument backed up with:
- Empirical evidence for goodness
- Rule of 3: don’t claim something is a pattern until you can point to three independent usages
- Comparisons with other solutions (possibly failed ones too)
- Independent authorship
- Don’t be the second person after the inventor to use it
- Reviews
- By independent domain and pattern experts
Documenting Patterns
- Name
- Intent: a brief synopsis
- Motivation: context of the problem
- Applicability: circumstances under which the pattern applies
- Structure: class diagram of solution
- Participants: explanation of classes/objects and their roles
- Collaboration: explanation of how classes/objects cooperate
- Consequences: impact, benefits, liabilities
- Implementations: techniques, traps, language-dependent issues
- Sample code
- Known uses: well-known systems already using the pattern
Documenting Pattern Instances
- Map each participant in the GoF pattern to its corresponding element
- Interface/abstract
- Concrete
- Association
The Gang of Four
The design patterns book authored by Gamma, Helm, Johnson and Vlissides containing a catalog of 23 design patterns, each being a creational, structural or behavioral pattern.
Creational Patterns
Abstract Factory (AKA Kit)
Interface for creating families of related/dependent objects without specifying their concrete classes.
e.g. when making UI elements, want scrollbars, buttons etc. to all use the same theme.
public abstract class AbstractFactory {
    public abstract AbstractProductA createProductA();
    public abstract AbstractProductB createProductB();
}

public class ConcreteFactory1 extends AbstractFactory {
    public AbstractProductA createProductA() {
        return new ConcreteProduct1A();
    }
    ...
}
Builder
Separates construction of a complex object from its representation so that the same construction process can create different representations.
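A sketch of a common fluent variant of the pattern, with a hypothetical Pizza class: the step-by-step construction is separated from the finished (immutable) object:
class Pizza {
    private final String base;
    private final boolean cheese;

    private Pizza(Builder builder) {
        this.base = builder.base;
        this.cheese = builder.cheese;
    }

    static class Builder {
        private String base = "thin";
        private boolean cheese = false;

        Builder base(String base) { this.base = base; return this; }
        Builder withCheese() { this.cheese = true; return this; }
        Pizza build() { return new Pizza(this); }
    }
}

// Usage: Pizza pizza = new Pizza.Builder().base("deep dish").withCheese().build();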
Factory Method (AKA Virtual Constructor)
Define interface for creating an object and let subclasses decide which concrete class to instantiate - allows instantiation to be deferred to its subclasses.
Problem: code that expects an object of a particular class doesn’t need to know which subclass the object belongs to (as long as it follows the LSP).
The exception to this rule is when creating a new object - you must know its exact class. Hence, ‘new’ is glue.
Broken polymorphism:
if (isWizard()) weapon = new Wand();
else if (isFighter()) weapon = new Sword();
Solution:
- Move the ‘new’ method into an abstract method
- Override that method to create the right subclass object
public abstract class Creator {
public abstract Product factoryMethod();
}
public class ConcreteCreator extends Creator {
public Product factoryMethod() {
return new ConcreteProduct();
}
}
Creator
factoryMethod()
doSomething()
△
|
|
ConcreteCreator
factoryMethod()
Note that it is common to have more than one factory method.
Parameterized factory methods can produce more than one type of product, add constraints/details etc.
public Weapon makeWeapon(WeaponType type) {
if (type == WeaponType.DAGGER) return new Dagger();
...
}
public Weapon makeWeapon(Owner self) {
if (self.height() > 180) return new BigDagger();
...
}
Prototype
Specify the kinds of objects to create using a prototypical instance, creating new objects by copying the prototype.
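A minimal sketch, assuming illustrative Shape/Circle types and a copy method instead of Cloneable:
public interface Shape {
    Shape copy();
}

public class Circle implements Shape {
    private final int radius;

    public Circle(int radius) { this.radius = radius; }

    // New objects are created by copying a configured prototype rather than calling new on a known class
    public Shape copy() {
        return new Circle(this.radius);
    }
}
// Usage: Shape prototype = new Circle(5); Shape another = prototype.copy();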
Singleton
Intent: ensure class only has one instance and provide a global point of access to it.
Problem:
- Some classes should only have one instance
- How can we ensure someone doesn’t construct another?
- How should other code find that one instance?
Solution:
- Make the constructor private
- Use a static attribute in the class to hold the one instance
- Add a static getter for the instance
<<singleton>>
Singleton
private $uniqueInstance
public $getInstance()
private Singleton()
(NB: $ means static)
Use lazy initialization approach in the getInstance method.
Issues:
- Sub-classing the singleton is possible but difficult; may not be worth using this pattern
- Thread safety: getInstance should be synchronized and uniqueInstance made volatile
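A minimal sketch of the lazy, synchronized approach described above:
public class Singleton {
    private static Singleton uniqueInstance;

    private Singleton() { }

    // synchronized so two threads cannot both see null and create two instances
    public static synchronized Singleton getInstance() {
        if (uniqueInstance == null) {
            uniqueInstance = new Singleton(); // lazy initialization
        }
        return uniqueInstance;
    }
}
Double-checked locking with a volatile uniqueInstance is a common refinement that avoids taking the lock on every call.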
Structural Patterns
Adapter
Converts interface of a class into an interface the client expects - allows classes with incompatible interfaces to work together.
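A minimal object-adapter sketch (all names here are illustrative, not from the lecture):
// Target: the interface the client expects
public interface MediaPlayer {
    void play(String filename);
}

// Adaptee: existing class with an incompatible interface
public class LegacyAudioLibrary {
    public void openAndDecode(String path) { /* ... */ }
}

// Adapter: implements the target and translates calls onto the adaptee
public class LegacyAudioAdapter implements MediaPlayer {
    private final LegacyAudioLibrary adaptee = new LegacyAudioLibrary();

    public void play(String filename) {
        adaptee.openAndDecode(filename);
    }
}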
Bridge
Decouple an abstraction from its implementation so they can be varied independently of each other.
Composite
Problem: objects contain other objects to form a tree, but want client code to be able to treat composite and atomic objects uniformly.
e.g. Person has eat(FoodItem) method where Bread etc. is a FoodItem. Meal is composed of multiple FoodItems so a new method, eat(Meal) is required.
Solution: create abstract superclass that represents both composite and atomic objects.
Used in Swing’s JComponent.
public class Client {
private Component component;
}
public abstract class Component {
public abstract void doSomething();
/*
These methods are sometimes defined in Composite instead
*/
public abstract void add(Component component);
public abstract void remove(Component component);
public abstract Component getChild(int index);
}
public class Leaf extends Component {
public void doSomething() { }
public void add(Component component) {
throw new UnsupportedOperationException();
// Or: do nothing, or return false from a boolean version
}
...
}
public class Composite extends Component {
private List<Component> components;
public void doSomething() {
for (Component component : components) {
component.doSomething();
}
}
public void add(Component component) {
components.add(component);
}
public void remove(Component component) {
components.remove(component);
}
public Component getChild(int index) {
return components.get(index);
}
}
Notes:
- GoF has add etc. methods being abstract; the Head First book has them being concrete, throwing exceptions (e.g. UnsupportedOperationException)
- Easy to add new components
- Common for child to know its parent
- Can make containment too general
Decorator
Attaching additional responsibilities to an object dynamically - an alternative to subclassing for extending functionality. It allows you to extend the existing functionality but not add new public methods.
An object can only belong to one subclass at a time (and C++'s multiple inheritance leads to hell), but it can be wrapped in multiple decorators at the same time: composition over inheritance.
Solution: use aggregation instead of subclassing.
public abstract class Component {
public abstract void doSomething();
}
public class ConcreteComponent extends Component {
public void doSomething() {
...
}
}
public class Decorator extends Component {
protected Component component;
public Decorator(Component component) {
this.component = component; // the component being wrapped
}
public void doSomething() {
component.doSomething();
}
}
public class ConcreteDecoratorA extends Decorator {
public ConcreteDecoratorA(Component component) {
super(component);
}
public void doSomething() {
super.doSomething();
addedBehavior();
}
private void addedBehavior() { }
}
public class ConcreteDecoratorB extends Decorator {
private State addedState;
public ConcreteDecoratorB(Component component) {
super(component);
}
public void doSomething() {
super.doSomething();
}
}
Example: Swing’s JScrollPane can be attached to any pane.
Notes:
- A concrete decorator knows the component it decorates
- Business rules: e.g. number, order of decorations
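A short usage sketch of the classes above - because every decorator is itself a Component, decorations can be stacked:
// e.g. somewhere in client code
Component plain = new ConcreteComponent();
// Each decorator wraps the one inside it and adds behaviour around the delegated call
Component decorated = new ConcreteDecoratorA(new ConcreteDecoratorB(plain));
decorated.doSomething();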
Façade
Providing a unified interface to a set of interfaces in a subsystem - a higher-level interface to make the subsystem easier to use.
Flyweight
Using sharing to support large numbers of fine-grained objects efficiently.
Proxy
Providing a surrogate/placeholder for another object to control access to it.
Behavioral Patterns
Chain of Responsibility
Avoid coupling the sender of a request to its receiver by allowing more than one object to handle the request - chain receiving objects and pass the request along the chain until an object handles it. (e.g. errors bubble up until it gets handled).
Command (AKA Action, Transaction)
Intent: encapsulate the request as an object to
- Parametrize clients with different requests
- Queue/log requests
- Support undoable operations
Participants:
- Command: declares the interface for executing the operation
- ConcreteCommand: implements execute; defines the binding between the receiver object and the action
- Client: creates the ConcreteCommand and sets its receiver
- Invoker: e.g. a button or some other UI element; asks the command to carry out the request
- Receiver: knows how to carry out the request (the target of the command)
// e.g. remote control
public class Invoker {
private Command command;
public Invoker(Command command) {
// e.g. light toggle command
this.command = command;
}
// e.g. called when the button is pressed
public void invoke() {
command.execute();
}
}
public interface Command {
void execute();
void unexecute();
}
public class ConcreteCommand implements Command {
private Receiver receiver;
public ConcreteCommand(Receiver receiver) {
this.receiver = receiver;
}
public void execute() {
receiver.action();
}
public void unexecute() { }
}
public class Receiver {
// e.g. smart light switch
public void action() { }
}
public class Client {
private Receiver receiver;
private ConcreteCommand command;
Client() {
receiver = new Receiver();
command = new ConcreteCommand(receiver);
}
}
From reading Refactoring Guru’s Command pattern page:
// Receiver: where the business logic lives
class Data {
// assumed to provide snapshot(), restore(snapshot) and doStuff(value)
}
// Client: configures commands, passes them to invokers
class App {
constructor() {
this._data = new Data();
this._history = [];
this._stuffCommand = new StuffCommand(this._data);
this._stuffButton = new Button("Do stuff", () => this._execute(this._stuffCommand));
document.body.appendChild(this._stuffButton.element);
}
_execute(command) {
command.execute();
this._history.push(command);
}
_unexecute() {
if (this._history.length) this._history.pop().unexecute();
}
}
// ConcreteCommand: command which calls business logic
class StuffCommand {
constructor(data) {
// Gets data in the constructor or on its own
this._data = data;
this._rand = Math.random();
}
execute() {
// Put command in queue, etc.
this._snapshot = this._data.snapshot();
this._data.doStuff(this._rand);
}
unexecute() {
// Generic undo could be done in the app, but a more efficient undo needs to know what changed
// and how to undo it, so it lives in the command
this._data.restore(this._snapshot);
}
}
// Invoker: the UI element that triggers command execution
class Button {
constructor(text, onClick) {
this.element = document.createElement("button");
this.element.textContent = text;
this.element.addEventListener("click", onClick);
}
}
Interpreter
Given a language, define a representation for its grammar and an interpreter that uses this to interpret sentences.
Iterator (AKA Cursor)
Problem:
- Sequentially access elements of a collection without exposing implementation
- Allows for different types of traversals (e.g. ordering, filtering)
- Allow multiple traversals at the same time
Solution:
- Move responsibility for traversal from collection to an Iterator object, which stores the current position and traversal mechanism
- The collection creates an appropriate iterator
public interface Collection {
public Iterator createIterator();
}
public interface Iterator {
public Element first();
public Element next();
public boolean isDone();
public Element currentItem();
}
public class ConcreteCollection implements Collection {
public ConcreteIterator createIterator() {
return new ConcreteIterator(this); // iterator over this collection
}
}
public class ConcreteIterator implements Iterator {
...
}
<<interface>> <<interface>>
Collection Iterator
createIterator() -------> first()
△ next()
| isDone()
| currentItem()
| △
| |
ConcreteCollection ConcreteIterator
The set iterator in Java does not have a first method as there is no guaranteed ordering.
for(Collectable c: someCollection) {
}
// Implicitly does:
Collectable c;
Iterator<Collectable> iterator = someCollection.iterator();
while(iterator.hasNext()) {
c = iterator.next();
// But the explicit version can also do
if (c.val == 10) {
iterator.remove();
}
}
Mediator
Define an object that encapsulates how a set of objects interact with each other - reduces coupling by keeping objects from referring to each other explicitly.
Memento
Capture and externalize an object’s internal state (without violating encapsulation) so that it can be restored to this state.
Observer (AKA Publish-Subscribe, Dependents)
One-to-many dependency between objects so that all dependents are notified when an object changes state.
Problem: separate concerns into different classes while avoiding tight coupling and keeping them in sync (e.g. separating GUI code from model).
Solution:
- Separate into Subjects and Observers
- The Subject knows which objects are observing it but nothing else
- When the Subject changes, all Observers are notified
Subject Observer
attach(Observer) 0..*
detach(Observer) -----------> update(Observable, Object)
notify()
△ △
| |
| 1 |
ConcreteSubject <------------ ConcreteObserver
doSomething() subject update()
doSomething will call notify
In Java:
- Subject is a class called Observable while Observer is an interface - deprecated
- Adds a ‘dirty’ flag to avoid notifications at the wrong time (e.g. transactions in progress?)
JavaBeans provides java.beans.PropertyChangeSupport:
private PropertyChangeSupport pcs = new PropertyChangeSupport(this);
pcs.addPropertyChangeListener(PropertyChangeListener)
pcs.removePropertyChangeListener(PropertyChangeListener)
pcs.firePropertyChange(String, oldVal, newVal)
class Observable {
private PropertyChangeSupport pcs;
public Observable() {
pcs = new PropertyChangeSupport(this);
}
public void addListener(PropertyChangeListener pcl) {
pcs.addPropertyChangeListener(pcl);
}
public void removeListener(PropertyChangeListener pcl) {
pcs.removePropertyChangeListener(pcl);
}
public void doSomething() {
pcs.firePropertyChange(name, oldVal, newVal);
}
}
class Observer implements PropertyChangeListener {
public void propertyChange(PropertyChangeEvent pce) {
}
}
Notes:
- Changes are broadcast to all Observers; each needs to decide if it cares about a particular change
- Observers don’t know about each other; complex dependencies and cycles are possible
- Observers aren’t told what changed
- Figuring out what changed can take a lot of work and may require them to retain a lot of the Subject’s state
- A variant allows the update method to contain details about what changed - more efficient, but tighter coupling
- The subject should only call notify when it is in a consistent state (i.e. the transaction has ended)
- Beware subclasses calling base class methods that call notify()
State
Intent: allow objects to alter behavior when their internal state changes - the object appears to have changed class.
TL;DR treat objects as FSMs that change their behavior depending on state.
Need a state chart representation of object behavior e.g.
admin approved
o draft <-------------------
^ expired |
| submit/ v
| review failed published
v ^
review ___________________|
admin reviewed
When a new stage gets added, what happens to existing objects?
The pattern:
- State-specific behavior encapsulated in concrete classes
- State interface declares one or more state-specific methods - all methods should make sense for all concrete states
- Context communicates via the state interface
Implementation:
- The context and concrete states can perform state transitions by replacing the context’s state object
- If transition criteria are fixed then it can be done in the context - this allows concrete state classes to be independent
- Initial state is set in the context
- Explicit transitions prevent getting into inconsistent states
public abstract class State {
public abstract void handle1();
public abstract void handle2();
}
public class ConcreteStateA extends State {
private Context context;
// May possibly be in the abstract class
public void setContext(Context context) {
this.context = context;
}
public void handle1() {
State state = new ConcreteStateB();
context.changeState(state);
}
public void handle2() { }
}
public class ConcreteStateB extends State {
public void handle1() { }
public void handle2() { }
}
public class Context {
State state;
public Context(State initialState) {
state = initialState;
}
public void changeState(State state) {
this.state = state;
}
public void request1() {
...
state.handle1();
}
public void request2() {
...
state.handle2();
}
}
Strategy (AKA Policy)
Intent: define a family of algorithms, encapsulating each one and making them interchangeable - Strategy lets the algorithm vary independently from the client that uses it.
Problem: change an object’s algorithm dynamically, rather than through inheritance.
Solution: move the algorithms into their own class hierarchy (composition over inheritance).
Used by AWT/Swing layout managers.
public abstract class Strategy {
protected Context context;
public abstract Result algorithm();
}
public class ConcreteStrategyA extends Strategy {
public Result algorithm() {
...
}
}
...
Notes:
- Context needs to know what strategies exist so it can pick one
- Strategies need to access relevant context data
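The Context class is not sketched above; one possible shape (setStrategy and doWork are assumed names) is:
public class Context {
    private Strategy strategy;

    public void setStrategy(Strategy strategy) {
        this.strategy = strategy; // swap the algorithm at runtime
    }

    public Result doWork() {
        return strategy.algorithm(); // delegate to whichever strategy is plugged in
    }
}
// Usage: context.setStrategy(new ConcreteStrategyA()); context.doWork();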
Template Method
Define the skeleton of an algorithm, deferring some steps to subclasses - allows subclasses to redefine certain steps of an algorithm without changing its structure.
Problem: implement the algorithm skeleton but not the details
Solution: put the skeleton in an abstract superclass and use subclass operations to provide the details
public abstract class AbstractClass {
public final void templateMethod() {
// the fixed skeleton calls the steps that subclasses fill in
primitiveOperation1();
primitiveOperation2();
}
abstract protected void primitiveOperation1();
abstract protected void primitiveOperation2();
}
public class ConcreteClass extends AbstractClass {
protected void primitiveOperation1() { }
protected void primitiveOperation2() { }
}
Hooks:
- Can include hooks so that subclasses can hook into the algorithm at suitable points - they are free to ignore the hook
- Hook declared in abstract class with empty or default implementation
e.g. a PrepareMeal abstract class may have an isTakeAway hook. A subclass could consult it during assembly to change how the meal is assembled depending on whether it is an eat-in or takeaway meal.
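A sketch of that PrepareMeal example (method names are assumptions; here the template method itself consults the hook, but a subclass could equally check it inside its own step):
public abstract class PrepareMeal {
    // Template method: the skeleton is fixed, the steps and the hook are not
    public final void prepare() {
        cook();
        assemble();
        if (isTakeAway()) {
            packageToGo();
        } else {
            serveAtTable();
        }
    }

    protected abstract void cook();
    protected abstract void assemble();
    protected void packageToGo() { }
    protected void serveAtTable() { }

    // Hook: default implementation that subclasses are free to override or ignore
    protected boolean isTakeAway() {
        return false;
    }
}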
Visitor
Represent an operation to be performed on elements of an object structure - allows new operations to be defined without changing the classes of the elements it operates on.
Pattern Language
Alexandrian Patterns
Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice
Christopher Alexander, A Pattern Language
A set of interrelated patterns all sharing some of the same context and perhaps classified into categories.
Abstract Factory meets Singleton
class ConcreteFactory extends AbstractFactory {
private static ConcreteFactory instance;
public static ConcreteFactory getInstance() {
if (instance == null) instance = new ConcreteFactory();
return instance;
}
public ConcreteProductA getA() {
return new ConcreteProductA();
}
}
Iterator meets Factory Method
interface AbstractCollection {
/**
* This method is a factory method that returns a product
*/
Iterator iterator();
}
interface Iterator { ... }
public class ConcreteCollection implements AbstractCollection {
public ConcreteIterator iterator() {
return new ConcreteIterator();
}
}
public class ConcreteIterator implements Iterator { ... }
20. Design by Contract™
Preconditions: before calling the service, the client must check everything is ready.
Invariants: something true both before and after the service is called
Postconditions: promises the service makes that should be met after the service finishes
e.g.
public interface Stack<T> {
/**
* Precondition: stack not empty
* Postcondition: stack unchanged, last pushed object returned
*/
public T peek();
/**
* Precondition: stack not empty
* Postcondition: stack size decreased by 1, last pushed object returned
*/
public T pop();
/**
* Precondition: stack not full
* Postcondition: stack size increased by 1, `peek() == o`
*/
public void push(T o);
}
The contract for a class is the union of the contracts of its methods.
Testing
Contracts inform testing; assertions allow us to check:
- If preconditions hold
- If invariants are invariant
- If postconditions hold
- If there are any side effects
- If there are any exceptions
Preconditions/
Input Values
|
v
|--------------------|
| Software Component |
| |
| Errors/Exceptions | --> Side Effects
| |
|--------------------|
|
v
Postconditions/
Output Values
Inheritance
Subclasses may have different pre/post conditions. This would require checking whether the object is an instance of the subclass to determine if preconditions are met - which breaks the LSP.
Contracts are inherited:
- Preconditions can be loosened
- Postconditions and invariants can be tightened
That is, require no more, promise no less.
Hence, instead of saying that ‘Bar is-a Foo’, we can more formally say ‘Bar conforms to the contract of Foo’.
Guidelines
- No preconditions on queries; it should be safe to ask a question
- No fine print
- Don’t require something that the client can’t determine; preconditions should be supported by public methods
- But the client doesn’t need to verify postconditions
- Use real code where possible
- But use English where you can’t show all the semantics
- No hidden clauses; the preconditions should be sufficient and complete
- No redundant checking; the server shouldn’t verify preconditions are met
Specifying Contracts
Eiffel:
- Keywords in method declaration: require and ensure
- Standardized and multiple projects to add language support
Informally, Java has assert expression : message; statements that are disabled by default (the -ea flag is required)
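A small illustrative sketch (the Account class and its fields are assumptions) showing how assert can encode a contract:
public class Account {
    private long balance;

    public void withdraw(long amount) {
        // Preconditions
        assert amount > 0 : "amount must be positive";
        assert amount <= balance : "cannot withdraw more than the balance";

        long oldBalance = balance;
        balance -= amount;

        // Postcondition
        assert balance == oldBalance - amount : "balance must decrease by exactly amount";
        // Invariant
        assert balance >= 0 : "balance must never be negative";
    }
}
Run with java -ea to enable the checks; with assertions disabled the method behaves normally.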
Philosophy for Exceptions
Throw Java exceptions iff a contract violation occurs.
Handling violations:
- Try to fix the problem
- Try an alternative approach
- Clean up and throw an exception
- Clean up: release resources, locks, rollback transactions etc.
Hence, exceptions must be caught anywhere where clean up is needed.
Interfaces are contracts
- When a contract can be recognized independently from an implementation, an interface should be considered
- Interfaces can be composed; one class can implement many interfaces
- Interfaces can be extended to specialize contracts
Inheritance: The Dark Side
- Inheritance for implementation; child has no intention of honouring the inherited contract
- ‘Is-a-role-of’: merges two contracts
- ‘Becomes’: switching contracts
- Over-specialization: contract more specific than necessary
- Violating the LSP; breaks the contract
21-22. Code Smells
Process
- Sniff
- Prioritise and evaluate
- Refactor:
- Split
- Join
- Move
- Extract
- Rename
- etc.
How large is large?
- Smells are inherently subjective
- Easier to detect in code we know
- Can be informed by measurements - software metrics
- Define quantities that represent code qualities we want to understand
- Gather data
- Analyze results, statistics
- Percentiles, min/max, outliers etc.
Morphology
Fan-out:
- How many other methods/functions does the method call
- Is it too ‘big’?
Fan-in:
- How many others call the method
- How reusable is it?
Method Length
Metrics:
- Lines of code
- Number of statements
- Comments included?
- Declarations included?
- Whitespace included?
- Amount of logic: cyclomatic complexity
- Number of branches
- Amount of nesting
Long methods are a problem because:
- Method may be doing too many things
- Single responsibility principle
- Cohesion
How long is too long? Use:
- Metrics
- Counting rules
- Distribution analysis
- e.g. statement count vs cyclomatic complexity; find and refactor outliers
Object-Oriented Metrics
Chidamber & Kemerer suite commonly used:
- Weighted Methods per Class (WMC): class size
- How many methods are there? Takes into account constructors, overloads etc.
- Number of Children (NOC): structure
- Depth in Inheritance Tree (DIT): structure
- Coupling Between Objects (CBO): dependencies
- Response For Class (RFC): message passing
- Lack of Cohesion Of Methods (LCOM): property/method interactions
Large Class Smell
Knows too much, does too much; violating single responsibility principle
i.e. a God class
Break into smaller cohesive classes: extract class, extract interface
Long Parameter Lists
- Interface segregation
- If the parameter list is long, what does it tell us about the method?
- Single responsibility
- God method?
- How complex is it?
Solutions:
- Introduce a parameter object
- e.g. replacing start/end date with a date range object (see the sketch after this list)
- Preserve the whole object
- Instead of dissecting and sending particular properties of an object, just send the whole object
- Replace the parameter with a method call
- Caller should let the method get the data itself, rather than doing it for it
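A sketch of the 'introduce a parameter object' refactor from the list above (DateRange and findBookings are illustrative names; java.time.LocalDate stands in for the dates):
import java.time.LocalDate;

// Before: List<Booking> findBookings(LocalDate start, LocalDate end)
// After: the two dates travel together and the range can validate itself
public class DateRange {
    private final LocalDate start;
    private final LocalDate end;

    public DateRange(LocalDate start, LocalDate end) {
        if (end.isBefore(start)) {
            throw new IllegalArgumentException("end must not be before start");
        }
        this.start = start;
        this.end = end;
    }

    public LocalDate getStart() { return start; }
    public LocalDate getEnd() { return end; }
}
// List<Booking> findBookings(DateRange range)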
Duplicated Code
- ‘Once and only once’
- Same problem may be solved in multiple ways (different programmers)
Want a single place of truth.
Solution: extract it into a method
Message Chains
bla.getSubProperty().getSubProperty()...
Bad because:
- Long message stack
- Complexity
- Cause dependency between classes in the chain
- Changes in any relationship causes cascading changes
Law of Demeter: only talk to immediate friends.
Method m() of object o should only invoke methods of:
- o itself
- Parameters of m
- Objects created within m
- Properties (attributes, direct components) of o
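A small illustration of a chain and a 'tell, don't ask' style fix (Order, Customer and Address are hypothetical classes):
// Violates the Law of Demeter: reaches through objects the caller did not create or receive
String city = order.getCustomer().getAddress().getCity();

// Better: ask the immediate friend, which delegates internally
String shippingCity = order.getShippingCity();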
Dead/Unreachable/Deactivated/Commented Code
Dead code:
- Source code that might be executed but the result of which is never used
- Unreferenced variables/functions are not dead code; they are automatically removed by the compiler/linker
Unreachable code:
- Code that can never be reached (e.g. an unreachable switch case, code after a return statement)
- No control flow path to the code; harder to read, takes up more memory, cache
Deactivated code:
- Code that can’t be executed now e.g.
#if os(iOS)
Commented code:
- Misleading, difficult to read and maintain
- Why was it commented out? Can it be deleted?
- What code can we trust?
Switch Statements
Large switch/if statements:
- What does it mean?
- Adds conditional complexity
- May call methods all over the codebase
- OOP should rarely use switch statements; use polymorphism
Solutions:
- Switch on types? Use polymorphism
- Switch on type code? Replace type code with subclasses
- Checking against null? Introduce null objects
Can leave it alone if it just performs some simple actions
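A sketch of 'switch on types? use polymorphism' (GameCharacter, Wizard and Fighter are illustrative names):
// Before: every new character type means editing this switch
// switch (character.getType()) {
//     case WIZARD:  damage = 3;  break;
//     case FIGHTER: damage = 10; break;
//     default:      damage = 1;
// }

// After: each subclass carries its own behaviour
public abstract class GameCharacter {
    public abstract int attackDamage();
}

public class Wizard extends GameCharacter {
    public int attackDamage() { return 3; }
}

public class Fighter extends GameCharacter {
    public int attackDamage() { return 10; }
}
// int damage = character.attackDamage();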
Comments
- Integrity: are comments up to date? Can they be trusted
- Why are comments needed? Is it unreadable? Doing too many things?
- Comments vs Javadoc vs Git commits
Solution:
- If expression is difficult to understand, extract it to a variable
- If the code is difficult to understand, extract it to a method
- Rename method to make it more precise
Names
Type Embedded in Name
Found mostly in old code. e.g. strFirstName
What happens if type is changed? Decisions should be hidden.
Uncommunicative Names
Should be descriptive, succinct, and have consistent names.
Speculative Generality
When you make general solutions because you speculate/anticipate what you might need in the future: do not speculate about tomorrow’s problems.
YAGNI, so don’t over-engineer your solution.
But at the same time, this needs to be balanced with planning for extensibility.
Solutions:
- Collapse hierarchy (e.g. if you have a bunch of classes that don’t currently do anything)
- Inline class (opposite of extracting class)
Inappropriate Intimacy
How much does a class need to know about another? Ideally little; low coupling is preferable.
Solutions:
- Move functions; keep data and methods together
- Change bi-directional association to unidirectional
- Does a key need to know about the key chain it’s on?
- Replace superclass with delegate (composition over inheritance)
Indecent Exposure
- Every variable and method should be private unless it has to be public
- Hide your decisions
- Don’t worry about efficiency: only move to direct access if informed by performance monitoring
Feature Envy
Method making extensive use of another class (e.g. envious of its methods and wishing it had them).
Cohesive elements should be in the same module/class.
Shotgun Surgery
When making a change requires splattering lots of small changes across a large swath of the system; changes should be localized.
This probably means the single responsibility principle has been violated.
Solutions:
- Move methods/data
- Create a new class
Test Smells
Hard to test code:
- Highly coupled
- Asynchronous/multi-threaded
- GUI
- Buggy tests
Obscure tests:
- Difficult to understand
- Too much/too little information
- Eager tests; testing too much functionality
- Irrelevant information
- Hard-coded test data
- Indirect tests
Production bugs:
- Too many bugs getting to production
- Is there test coverage? Are the tests good?
- Are we checking or testing?
- Humans test, machine check
- Are tests covering all possibilities
- Are tests buggy?
High maintenance:
- Tests need to be modified often
- Are tests too complex, obscure?
- Single responsibility principle
- Test duplication?
Fragile tests:
- Interface too sensitive (e.g. GUI tests sensitive to resizing)
- Context too sensitive
- Pass condition sensitive to minor changes
- Pass condition sensitive to date/time, server state
Erratic tests:
- Failing for no reason
- Works on only some environments
- Be wary of conditions in tests
SENG301 Exam Notes (01-13)
Methodologies
Waterfall: Build it Twice
- Requirements definition
- System/software design
- Implementation/unit testing
- Integration/system testing
- Ops/maintenance
Spiral
- Determine objectives
- Evaluate alternatives, mitigate risks
- Implement/test
- Plan next iteration
Agile
Core Values
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Principles
- Satisfy the customer: early and continuous delivery
- Embrace changing requirements
- Deliver working software frequently
- Business + developers work together
- Trust and support motivated individuals
- Face-to-face is the best communication medium
- Working software is the primary measure of progress
- Sustainable development; a marathon, not a sprint
- Continuous attention to technical excellence and design
- Simplicity; maximizing the amount of work not done
- The best architecture, requirements and designs come from self-organizing teams
- The team should regularly reflect on how to become more effective and then act on this
Scrum
Values:
- Openness to feedback and ideas
- Focus; avoid distractions
- Respect others, even when things go wrong
- Courage to take risks and fail
- Commitment to the team and project
Team:
- PO represents client, prioritizes backlog
- SM:
- Coach: accessible, act fairly, confident, humble
- Should not immediately jump into technical issues; trust the team
- Buffer between team and management
- Dev team: self organized, cross-functional
Initial startup:
- Ask:
- Who will use it
- What it does
- Why it does it
- How it should do it
- What the goal for the product is for a month, year etc.
- Discuss objectives with stakeholders
- Create backlog
- Agree on standards
- Communication
- ‘ready’ and ‘done’
Backlog:
- Product
- Prioritized, but order can change
- No tasks
- May have estimates
- Sprint
- Ordered
- Only high-priority items can modify the sprint backlog
- Must have estimates, tasks
Process
Before SP:
- PO refines backlog, states priorities, theme/goal for the sprint
SP1:
- PO presents highest-priority stories
- Team estimates complexity, negotiates with PO and commits to stories
SP2:
- Break stories down into SMART tasks
- Specific: everyone has an understanding of the task
- Measurable: meets DoD, ACs etc.
- Achievable: can do it, or ask others for help if needed
- Relevant: provides value to customer
- Time-boxed: limited to some duration - ask for help, split into subtasks, change task owner etc.
Metrics must be understood and have value to the team:
- Lean: execution time
- Kanban: task flow
- Scrum: delivering
Refactor: increases code quality; changes the structure of the code without changing its behavior. Should be low risk and done incrementally.
Re-engineering: fixing behavioral issues
Tracking:
- Sprint burn-down chart
- Alternative release burn-down chart
- To predict number of sprints until release - bar chart with bar for each sprint
- Scope likely to increase each sprint
- Height from origin to top of each bar = initial story points (for release) - story points completed towards release
- Height from bottom of bar to origin = total added scope
- Release likely when lines connecting top and bottom of bars meet
- Interference: hours spent/sprint on unplanned tasks - should not go up
- Remedial focus chart:
- Velocity: story points completed/sprint
- Scope changes should be in a different color
- Scope changes should trend towards zero
- Total height should not go down; likely means code quality is bad
Misc:
- PO should never be surprised; means poor communication
- Burn-down chart: sprint commitment = work done + work remaining should be an invariant
Review:
- Demo outcomes to PO and stakeholders, gather feedback
Retro. Discuss:
- Communication: intra-, inter-team
- Processes
- Scope/product vision; ensure everyone is clear
- Quality
- Environment: ensure it’s not toxic
- Skill: training/external expertise
- What are we doing well? What can we improve?
- Action items:
- Bubble method: list issues alone, pair with team member, discuss, repeat
- Circle method: list items, sorted good to bad. Group related items together
- SM decides on action items to take on in the next sprint
- Follow up on items frequently e.g. stand-ups
Smells:
- Long, un-time-boxed meetings with non-essential stakeholders
- Broken builds
- Bad tools
- Third parties; another point of failure
- Scope creep
- Unreliable POs and strategies to deal with them
- Taking over jobs, commanding others
- Blaming and shaming
- Siloing
- Poor code quality
User Stories
A promise of a conversation to be had with a hypothetical user. Should be discussed between the PO and the dev team.
As a <role>, I <action> so that <value>
- One goal, one interaction
- Conjunctions, multiple use cases? Epic
- Concrete and well-defined requirements, ACs
- Vague terms, unknown data types/operations, hidden business rules? Epic
- Natural language
Users:
- From user interviews not imagination
- Persona: fictitious with clear behavior patterns and goals: user archetype
- Profile: class of user defined by their background, physical/cognitive state, education, task experience
Use Case Scenarios
User stories are partial scenarios that focus on the result; use case scenarios are more detailed and focus on specifics.
Use case scenarios can be modelled as robustness diagrams:
웃 ------ |-o -----> ⥀ ----- o̲ ◆-----> o
actor boundary control entity property
- Boundary: interface
- Control: process/method of the interface
OR:
- Entity/boundary/control = MVC
INVEST
- Independent:
- Tasks can be done in any order
- No overlap between stories; features not implemented twice
- Mock features if necessary
- Negotiable: with PO; high-level enough that dev team has freedom to discuss details with PO
- Valuable: to customer e.g. frame database-layer work in terms of value to the customer
- Estimable: good enough to allow PO to schedule/prioritize. Possibly have spike at start to estimate
- Small: big tasks hard to estimate
- Testable: ACs clear enough to write tests for
- Alternative: feature is deterministic
Kanban
More task-oriented than Scrum; less time required for initial startup.
Kanban has continuous flow and delivery and no notion of sprints; just tasks.
Principles:
- Change management:
- Kanban additional layer on top of existing processes, not an overhaul
- Pursue incremental, evolutionary change not large, sweeping changes
- Encourage acts of leadership at all levels
- Service delivery
- Focus on customer needs and expectations
- Manage the work: empower people to self-organize around the work, avoiding micro-management
- Review network of services to improve service-oriented approach
Practices:
- Visualize the workflow in a Kanban board
- Limit WiP items in each stage; avoids overloading succeeding steps
- Manage flow: movement of work through process should be sustainable and predictable
- Make process policies explicit
- Strong feedback loops: daily team meetings and/or task-focused meetings
- Improve collaboratively: make hypotheses, prove them and apply results to the organization
Metrics:
- Queue: WiP items vs items in queue
- Throughput: work units processed per unit time
- Lead time: delta between customer demand and deployment
- Cycle time: WiP / throughput
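- e.g. 10 WiP items with a throughput of 5 items/day gives a cycle time of 10 / 5 = 2 days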
Lean
- Eliminate waste; no value to customer = waste e.g. partially done work, unnecessary code, bureaucracy, bad communication
- Amplify learning: short feedback loops; and create knowledge: document decisions and reasoning
- Defer decisions; you will have more information in the future
- Quality first, not as an afterthought
- Respect people; value their opinions, communicate proactively and have some amount of conflict
- Deliver fast; identify bottlenecks, create a MVP so the customer can give feedback
- Optimize the whole; global, not local maxima
Testing
Validation vs verification:
- Validation: check it meets requirements
- Verification: identify erroneous behaviors
Fit for purpose:
- User expectations
- Marketing: price, time-to-market etc.
- Purpose: safety-critical?
Agile Testing: automatic unit/acceptance tests, plus manual testing on RCs
Traditional Testing
Unit testing:
- Test every feature
- Identify edge cases, domain boundaries
- Code should be self-sufficient
- Prevent misuse of methods
- Asserts
- Incoming values
- Invariants
- Explicit verification of pre/post conditions in component boundaries
- Asserts
Component Testing:
- Test interfaces
- Method calls
- Message passing (e.g. HTTP)
- Shared memory
- Make components fail - can you make it fail? How does the rest of the system handle it?
- Stress testing
- Switching up call orders - find hidden dependencies
System testing:
- Third-party systems
- Dedicated testers
Load testing: test behavior/performance in normal/extreme loads to find bottlenecks
Stress testing: under unfavorable conditions
Capacity testing: if it can handle the expected amount of traffic
Reliability and Resilience
Faults, errors, failures:
- Human error: bad input data
- System fault: bug that leads to an error
- System error: effects of the bug
- System failure: when system does not produce expected results
Availability vs reliability:
- Availability: P(can access the system)
- Reliability: P(failure-free operation)
- Subjective: some subsystems may be worse or issues occur only for some users at some times
Working as designed: specifications may be wrong (not what the user wants) or erroneous (typo)
Improving reliability:
- Fault avoidance: good design, process, tools
- Visibility: need-to-know principle
- Capture exceptions to prevent system failures
- Error-prone constructs: avoid or encapsulate dangerous constructs (e.g. untyped variables)
- Fault Detection: testing and debugging, validation (boundaries, input values)
- Assert statements
- Fault tolerance
- Protection systems that monitor the rest of the system
- Multi-version programming:
- Concurrent computation
- Dissimilar hardware
- Multiple dev teams
- Voting systems
- Recoverable milestones
- Constants: clearer code, compile-time verification
4 Rs for Resilience Engineering:
- Recognition of how resources may be attacked
- Resistance: strategies to resist threats
- Recovery: data, software, hardware recovery procedures
- Reinstatement: process of bringing the system back
Security:
- Avoidance: avoid storing sensitive data in plain text
- Detection: monitor for possible system attacks
- Recovery: backup, deployment, insurance
CI/CD
Make your builds disposable - updating to a new build should be so easy that it doesn’t matter if you need to do it once or 20 times. Requires deployment and rollback scripts to be written
Deployment:
- Build once; one less variable
- Deploy the same way everywhere
- Smoke test; make a few simple requests
- Deploy into a copy of prod; firewall, network, OS etc.
Paperwork:
- Involve all parties in charge of environments
- Create pipeline plan, config management strategy
- Environment variables and how to transfer them
- Monitoring requirements
- When third party systems become part of the testing
- Disaster recovery plans
- SLA
- Archiving strategy
Reducing downtime:
- Off-peak time
- Modularize code, migrate one by one
- Roll-back processes
Blue-Green:
- Two identical environments for each piece with router to determine which environment is used
Canary:
- Deploy to subset of servers and subset of users
- Gradually increase over time
UX
Honeycomb:
- Usable: easy to learn and use
- Useful: product serves a purpose to the user
- Findable: user can quickly navigate to where they want to go
- Credible: is the company and product trustworthy?
- Accessible: are accessibility features built into your product
- Desirable: does the UI look good and function well?
Misc:
- Whitespace is important; don’t overload the interface
- Consistent design and behavior
- Icons alone not enough - use text
- Feedback
- Success/error messages
- Loading bars
- Modals for critical information only
- Content: avoid long pages, messy information, use visuals/interactive content
Stages of Design
Wireframes: low-fidelity sketch; flow between screens. Be extra careful who you engage with; if you show the wireframes, everyone will have different nitpicks and complaints.
Mock-ups: high-fidelity representations based off of the wireframes; no interactivity. Focus on the visual identity.
Prototype: limited functional implementation (e.g. mocking) but with a functioning workflow. Concentrate on visual behavior, risky features.
Weekly Readings
- Basket of Options
- Reduce dependencies: if the first fails, you want to be able to still take advantage of the second
- Key Results: reduce sandbagging by expecting people to achieve only some of their goals, not all or none
- Stand-ups
- Goals
- Shared understanding of goals
- Coordination
- Share problems/improvements
- Team bonding
- Who
- Anyone involved in day-to-day operations, ensure they don’t disrupt stand-up, may be more helpful for some to view burn-down charts etc.
- Work items - story-focused stand-up. People speak for the work items. Not everyone needs to speak, which can hide problems or shy people
- What
- 3 Q: accomplished yesterday, do today, obstacles. Order varies, may have additional questions e.g. code smells spotted
- Improvement board: public chart identifying obstacles and their progress. Avoid putting down problems the team has no control over
- Order
- Last first: encourages punctuality but likely to be unprepared
- Round robin: enforces notion of self-organizing team with no leader
- Pass the token: randomness encourages people to focus as when their turn is unknown. Difficult with larger teams
- Card: pass the token, but nothing to catch and no coffee to spill
- Walk the board: work items, not people attend. Move through work items ordered by stage reversed (e.g. review, then in progress) and priority (highest first). Blockers, emergency items and stuck items should go first. Danger of reporting to leader
- When/where
- Where the work happens, or in front of the story wall
- Same place/time; don’t wait for stragglers
- Start the day: difficult with flexible work hours. If not at the start of the day, trap of no work getting done until the stand-up
- < 15 minutes
- Signal ending
- Move discussions outside of stand-up (procedures such as consistent ‘take it offline’ phrase, raising hand)
- Autonomy; avoid having a leader: rotate facilitator, facilitator should avoid eye-contact to encourage speaker to talk to entire team
- Focus on tasks, not people: trap of focusing on what they are doing and not if the work they do has value
- Obstacles should be raised (barriers include forgetting, a high ‘pain’ threshold, and trust), and not only in the stand-up; take actions to remove them
- Domain-Driven Design
- Draw the business problem: pseudo-UML diagram, boxes and lines etc.
- Code: model… models, go back and forth between the diagram and code. Notice but don’t try too hard to avoid framework/plumbing stuff polluting the model
- Co-design with domain experts. Note down the verbs and nouns they use; use it in your model
- The expert is right, the model is wrong
- Or the model is trying to solve multiple problems; split the model into two (there will be duplication), go through the process again
- Test Principles
- Fast: sub-second; long enough to lose focus, not long enough to start something
- Deterministic: policy of deleting non-deterministic tests?
- Sensitive to behavioral changes, insensitive to structural changes
- Cheap to write, read, change
- Code Reviews
- Waiting for feedback is a pain
- No one is a full-time reviewer
- Not counted as ‘actual’ work
- Not valuing good reviews
- Reviewer new to codebase, not known if someone else is reviewing
- Too big
- Not understanding motivation for change
- Bikeshedding - focusing on minor issues e.g. style and overlooking large ones
- Face-to-face meetings to reach consensus
- Communicating Architecture
- Architects spend time on:
- Internal work: deep work
- Inwards communication: listening, reading, asking questions
- Outwards communication: presenting, documenting, outputting information
- 50:25:25 is good balance
- Too much internal thinking - impractical even if structure is good
- Too much communication - consultant, no solid thinking behind architecture
- Async communication:
- Writing scales well (video etc. not often used professionally)
- Record - records decisions that were made
- Avoid focusing too heavily on diagrams - requires textual explanation
- Messaging: engineers need to understand architecture at a concrete level; failure of architect if this is not the case - i.e. burden of communication is on the sender
- (Not) Self-Documenting Code
- Self-explanatory names should tell you what it does or what it is
- Comments should focus on why and how (implementation details) it does it
- Comments are part of the code and should be updated in lock-step
- YAGNI
- When building something unnecessary for now, consider:
- Cost of building: how much time will it take to add the extra extensibility
- Cost of delay: how adding the feature will delay other features that would otherwise be ready and generating revenue
- Cost of carry: the extension points will make the system harder to work with
- Cost of repair: if the extension point was written wrong
- Cost of refactoring: will it really be that much work to add it in the future?
- TDD
- TDD is the fastest, best way to build software i.e. cheaper
- Rely on individual judgement
- Internal quality and productivity directly correlated
- Test a chain by its links: if each link works, then the whole chain must too
- Testing should steer design; consider testability as a factor when designing systems
- Code Coverage
- Acts as a reasonable, objective and well-supported metric of test quality
- Increased code coverage correlates with reduced defects - encouraging testability leads to better modularity etc.
- High code coverage alone does not mean quality tests
- … but low code coverage does mean code is untested
- Pick code coverage based on criticality of code, how often the code will be updated, how long the code is expected to be used for
- Frequently changing code should be covered; per-commit coverage should be high to ensure project coverage increases over time
- Aggregate full code coverage (unit, integration, system tests) to avoid thinking total coverage is higher than it actually is
- Diminishing returns as code coverage increases
- Legacy code base? Leave it cleaner than you found it
- Code coverage too low? Don’t deploy it. Ensure it can actually be met so that it doesn’t become a rubber stamp
- ACM Code of Ethics
- Act in the public interest
- Act in the best interests of the client/employer
- Product should meet highest professional standards
- Maintain integrity and independence in judgement
- Managers/leaders should promote ethical approaches to software development and management
- Act in the best interests (integrity/reputation) of the profession
- Be fair and supportive to colleagues
- Self: lifelong learning, promote an ethical approach
- Gebru Google Departure
- Wrote paper on unintended consequences of some NLP systems (including ones used in Google search) and environmental impacts
- Rejected by internal review for ignoring relevant research
- Gebru’s concerns not addressed, threatened to resign
- Sent internal memo criticizing, fired by Google
- Gender Differences and Bias in Accepted Open Source Pull Requests
- Women’s pull requests are accepted more often than men’s when the authors are not identified as women
- Theories: survivorship bias, self-selection bias, women being held to higher standards
- Git Flow Branching
- Master
- Always production-ready
- Dev
- Feature
- Pull and push from/to dev
- When merging, use --no-ff (no fast-forward): makes it easier to revert features
- Release
- Branched off dev
- Can get bug fixes
- Pushed back to dev
- Merge into master
- Hotfix
- Branched off master
- Changes pushed to master and dev
- Git Rebase
- Reapplies all commits to the tip of another branch
- Previous commits exist but aren’t accessible
- If remote branch exists, force push required
- Never rebase a shared branch - requires a lot of merges and duplicated commits
- Chaos Engineering
- Partition the system into a control and experimental group
- Yes, in production
- Ensure the blast radius is minimized
- In the experimental group, add variables simulating crashes, network disconnects, large traffic spikes etc.
- Prioritize by impact and frequency
- Look for a difference between the two groups (and hope the control group hasn’t crashed)
- Be Kind in Code Review
- Assume good faith
- Comment on code, not developer
- Don’t use ‘obviously’/‘clearly’
- Be clear - assume low-context culture
- If code needs to be explained by author, it probably needs to be rewritten to be more clear
- The code reviewer has power. If abused, it can lead current contributors to become de-motivated and scare away new contributors, resulting in a smaller, less diverse set of contributors and slower progress on the code front
- Writing Pull Requests
- Plan the change
- Talk to others - gives them context and allows solutions to be brainstormed
- Pick relevant reviewers. They should have:
- Worked on it
- Worked on something related to it
- Understand what’s being changed
- Explain - summary and description of change
- Give context (e.g. issue tracker link)
- Long != good
- Guide readers; where is the most important change? What is just method renaming?
- Small:
- Don’t mix in unrelated changes
- Isolate related into multiple merge requests if possible
- Ready:
- Ensure it meets DoD
- Once feedback received, make a new merge request
- Rubber Duck Debugging
- Explain the code to the duck line by line
- Realize the code wasn’t actually doing what you thought it was doing
- Thank the duck
- Questions to Ask Bugs
- What is the pattern?
- Where else does it exist? Where are its siblings? Are there parallel paths that have the same pattern? Commit genocide
- What is its impact?
- Fallout to users
- Cost in productivity
- Follow-up with users, team, stakeholders
- Preventing more bugs:
- Why did it get through your existing process? What can be changed?
- Can that class of bug be removed?
Design Principles
Cohesion: data + behavior together.
Coupling: information hiding, separation of concerns, independence between modules
Push and pull between keeping related data together and separation of concerns.
Biggest issue in software design: complexity. Solution: decomposition
Nucleus-centred (OO) design: decide what the critical core of the program is and build interfaces around it. The core should be constant while details that have competing solutions are behind interfaces.
Information Hiding:
- Encapsulation = programming feature to create a boundary between what is inside and what is outside a module/class
- Information hiding = hiding design decisions to prevent unintended coupling
- e.g. list being returned - elements can be added or modified without knowledge of the owner, possibly causing invalid state
Tell, Don’t Ask:
- If a decision is based entirely on the state of one object, it should be made in that object, not outsourced
- Avoid asking for information from an object in order to make decisions about it
- Encourages cohesion; related data and behavior together
Composition over Inheritance:
- Avoid inheritance for:
- ‘is a role of’: store role in a separate object (and maybe have a list of roles)
- ‘becomes’
- Inheritance isn’t dynamic; changing class when some trivial detail changes is not great
- Changing the superclass contract
- If it can change, it ain’t inheritance
- Inheritance should be for ‘is a’ relationship
- Composition hides implementation details
Over-specialization: use the most general interface you can.
SOLID:
- Single responsibility principle
- Every module/class should have responsibility over one part of the functionality (and should be fully encapsulated)
- Bigger = more reasons to change, bigger blast radius when changes are made
- Open-closed: make your system open for extension, closed for modification
- Liskov-substitution principle: behavior shouldn’t change depending on subclass
- Interface-Segregation Principle: clients shouldn’t need to depend on methods/interfaces they don’t use
- Split interfaces into smaller, more cohesive interfaces
- e.g. a class implementing an interface and having lots of UnsupportedOperationExceptions
- Dependency-inversion: big things should not rely on little things (and vice-versa) - depend on abstractions instead
- New is glue
Design Patterns
A solution to a problem in a context. They are:
- Distilled wisdom about a specific problem
- A reusable design micro-architecture
Defining a pattern:
- Empirical evidence
- Rule of 3: at least three independent usages before calling it a pattern
- Comparison with other solutions
- Independent authorship
- Reviewed by pattern/domain experts
Documenting:
- Name
- Intent: a brief synopsis
- Motivation: context of the problem
- Applicability: circumstances under which the pattern applies
- Structure: class diagram of solution
- Participants: explanation of classes/objects and their roles
- Collaboration: explanation of how classes/objects cooperate
- Consequences: impact, benefits, liabilities
- Implementations: techniques, traps, language-dependent issues
- Sample code
- Known uses: well-known systems already using the pattern
Documenting instances:
- Map each participant in the GoF pattern to its corresponding element
- Interface/abstract
- Concrete
- Association
- Name
- Intent