Once an algorithm is shown to be correct, consider the:
- Efficiency in time and space
- Big O
- Big Omega
- Big Theta
- Complexity in terms of the size of the input
- Worst, average, and best case
- Asymptotic analysis: lim(n→∞) time(n)
The time taken will depend on the instance of the problem being solved.
Abstract random access machine (RAM) model:
- Accessing any byte in memory takes constant time
- Basic operations (arithmetic, comparison, assignment) take one unit of time
- Time inside loops proportional to the number of iterations
- Subroutine time is the time to execute its body
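The RAM cost model above can be made concrete by counting unit-cost operations in a simple loop. A minimal sketch (the function name and the per-statement unit costs are assumptions for illustration):

```python
def sum_of_squares(n):
    """Sum 1^2 + ... + n^2, counting unit-cost RAM operations."""
    ops = 0
    total = 0           # one assignment: 1 unit
    ops += 1
    for i in range(1, n + 1):
        total += i * i  # one multiply, one add, one assignment: 3 units
        ops += 3
    return total, ops

# Time inside the loop is proportional to the number of iterations:
# ops(n) = 1 + 3n, which grows linearly in n.
print(sum_of_squares(10))  # (385, 31)
```

Changing the constants (say, charging 2 units per assignment) changes ops(n) only by a constant factor, which is exactly why asymptotic analysis ignores such constants.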
O(g) = {f | ∃c>0: ∃x0: ∀x≥x0: 0 ≤ f(x) ≤ c·g(x)}
That is, f(x) is part of the set O(g) if there exists a positive multiplier c such that for every value of x greater than x0, f(x)≤c⋅g(x).
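For instance, f(x) = 3x² + 5 is in O(x²) with witnesses c = 4 and x0 = 3. A small sketch that checks the inequality over a sample range (the helper name and the chosen constants are assumptions for illustration; the loop only samples finitely many x, it is not a proof):

```python
def in_O(f, g, c, x0, x_max=1000):
    """Check 0 <= f(x) <= c*g(x) for all integer x in [x0, x_max]."""
    return all(0 <= f(x) <= c * g(x) for x in range(x0, x_max + 1))

f = lambda x: 3 * x * x + 5
g = lambda x: x * x

print(in_O(f, g, c=4, x0=3))  # True: 3x^2 + 5 <= 4x^2 once x^2 >= 5
```

The same check rejects functions that grow too fast: `in_O(lambda x: x**3, g, c=4, x0=3)` is False, since x³ eventually exceeds 4x².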
Ω(g) = {f | ∃c>0: ∃x0: ∀x≥x0: 0 ≤ c·g(x) ≤ f(x)}
That is, f(x) is part of the set Ω(g) if there exists a positive multiplier c such that for every value of x greater than x0, c⋅g(x)≤f(x).
Θ(g) = O(g) ∩ Ω(g)
That is, f(x) is part of the set Θ(g) if there exist positive multipliers c1 and c2 such that for every value of x greater than x0, c1⋅g(x)≤f(x)≤c2⋅g(x).
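As a concrete check, f(x) = 2x² + x is Θ(x²) with witnesses c1 = 2, c2 = 3, x0 = 1. A sketch verifying both bounds over a sample range (constants and helper name chosen here for illustration; a finite scan is evidence, not proof):

```python
def in_Theta(f, g, c1, c2, x0, x_max=1000):
    """Check c1*g(x) <= f(x) <= c2*g(x) for all integer x in [x0, x_max]."""
    return all(c1 * g(x) <= f(x) <= c2 * g(x) for x in range(x0, x_max + 1))

f = lambda x: 2 * x * x + x
g = lambda x: x * x

# 2x^2 <= 2x^2 + x <= 3x^2 holds for all x >= 1, since x <= x^2 there.
print(in_Theta(f, g, c1=2, c2=3, x0=1))  # True
```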
The worst case and best case are functions; the algorithm itself is not, so we cannot characterize the algorithm as a whole by Big O/Omega/Theta notation.
The best case of an algorithm will be Θ(g); the algorithm is not Θ(g). Ideally, an algorithm would be separately categorized by its best and worst case. In practice, we often categorize the algorithm by its worst case.
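Linear search illustrates the gap between cases: the best case (target at the front) takes one comparison, Θ(1), while the worst case (target absent) takes n comparisons, Θ(n). A sketch with a comparison counter (function and variable names assumed for illustration):

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # (0, 1): best case, a single comparison
print(linear_search(data, 8))  # (-1, 5): worst case, n comparisons
```

In practice the algorithm is usually quoted by the worst case, so linear search is called O(n) even though some inputs finish immediately.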
Common functions, sorted by complexity descending:
| Function |
| -------- |
| x! |
| a^x |
| x^a |
| x·log(x) |
| x |
| log(x) |
| a |
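Evaluating each row at a moderate x makes the ordering visible. A sketch using x = 20 with base a = 2 (both values are assumptions chosen so every row is easy to compute):

```python
import math

x, a = 20, 2
rows = [
    ("x!",      math.factorial(x)),  # about 2.4e18
    ("a^x",     a ** x),             # 1,048,576
    ("x^a",     x ** a),             # 400
    ("x log x", x * math.log2(x)),   # about 86
    ("x",       x),                  # 20
    ("log x",   math.log2(x)),       # about 4.3
    ("a",       a),                  # 2
]
# The values come out in strictly decreasing order, matching the table.
for name, value in rows:
    print(f"{name:>8}: {value:,.0f}")
```

Note the ordering of a^x versus x^a only stabilizes for large enough x; at very small x a polynomial can still exceed an exponential.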